A Survey on Hyperspectral Image Compression Techniques

DOI: 10.17577/IJERTCONV1IS04037


Prasanna Bagdalkar#1, Dr. Vipula Singh*2

# M.Tech in VLSI Design and Embedded Systems, Dept. of ECE, RNSIT, Bengaluru

*Professor & HOD

Dept of ECE, RNSIT, Bengaluru

1prasanna4life@gmail.com

Abstract

In this paper, we survey recent research efforts on novel hyperspectral image compression techniques. The development of new-generation remote sensors has led to the emergence of hyperspectral imaging, which produces a 3D data cube containing both spatial and spectral information. The huge amount of data generated by such advanced imaging imposes significant constraints on the bandwidth and storage capacity available for transmission and on-board processing; under these circumstances, the role of compression becomes crucial. The motivation for this work derives from the increased interest in hyperspectral sensors for the intelligence, surveillance and reconnaissance missions of the military and for Earth environmental studies.

  1. Introduction

    The most significant recent breakthrough in remote sensing has been the development of hyperspectral sensors. Hyperspectral sensors are capable of generating very high dimensional imagery through the use of sensor optics with a large number of (nearly contiguous) spectral bands, providing very detailed information about the sensed scene. From a remote sensing perspective, the spatial and significantly improved spectral resolutions provided by these latest generation instruments have opened cutting-edge possibilities in many applications, including environmental modelling and assessment, target detection and identification for military and defence/security purposes, agriculture and monitoring of oil spills and other types of chemical contamination, among many others.

Over the past decade, hyperspectral image analysis has matured into one of the most powerful and fastest-growing technologies in the field of remote sensing. Hyperspectral imaging, also known as imaging spectroscopy, combines the power of digital imaging and spectroscopy. Hyperspectral images provide much more detailed information about the scene than a normal colour camera image, which acquires only three spectral channels corresponding to the visual primary colours red, green and blue. In a hyperspectral image, each pixel records the light intensity (radiance) for a large number (typically a few tens to several hundred) of contiguous spectral bands. Hence, hyperspectral imaging leads to a vastly improved ability to classify the objects in the scene based on their spectral properties. Because every pixel in the image contains a continuous spectrum (in radiance or reflectance), it can be used to characterize the objects in the scene with great precision and detail.

Because of their potential, remote sensing hyperspectral sensors have been incorporated into different satellite missions over recent years, such as the currently operating Hyperion on NASA's Earth Observing-1 (EO-1) satellite or the CHRIS sensor on the European Space Agency (ESA)'s Proba-1. Furthermore, the remote sensing hyperspectral sensors to be flown in future missions will have enhanced spatial, spectral, and temporal resolutions, allowing more hyperspectral cubes to be captured per second with much more information per cube. For example, NASA's Jet Propulsion Laboratory (JPL) has estimated that a volume of 15 TB of data will be produced daily by near-term future hyperspectral missions such as NASA's HyspIRI. Similar data volumes are expected in European missions such as Germany's EnMAP or Italy's PRISMA. Unfortunately, this extraordinary amount of information jeopardizes the use of these last-generation hyperspectral instruments in real-time or near-real-time applications, owing to the prohibitive delay in delivering Earth observation payload data to ground processing facilities. Further, NASA has noted that data rates and data volumes produced by payloads continue to increase, while the available downlink bandwidth to ground stations remains comparatively stable.

Fig. 1. Basic structure of a hyperspectral image, representing both spatial and spectral information
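To make the data layout of Fig. 1 concrete, the following Python sketch (our illustration, not from the paper; array sizes are illustrative, with 224 bands as delivered by AVIRIS) represents a hyperspectral cube as a NumPy array and extracts one pixel spectrum and one band image.

```python
# Minimal sketch: a hyperspectral cube held as an array of shape (rows, cols, bands).
import numpy as np

rows, cols, bands = 100, 100, 224                 # illustrative spatial size, AVIRIS-like band count
cube = np.random.randint(0, 2**16, size=(rows, cols, bands), dtype=np.uint16)

pixel_spectrum = cube[40, 60, :]                  # one pixel -> a full radiance spectrum (224 samples)
band_image = cube[:, :, 30]                       # one band  -> a 2-D grayscale image
print(pixel_spectrum.shape, band_image.shape)     # (224,) (100, 100)
```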

1.1. BACKGROUND

In image compression there are two primary categories of algorithms: lossless and lossy. In lossless compression, the original image can be reconstructed exactly from the compressed data. Lossy compression schemes discard some amount of information but can achieve much higher compression rates; indeed, with many algorithms the user can choose exactly how large the resulting compressed image should be. The difference between the original image and the one reconstructed from lossy compressed data constitutes the compression error, and different applications have different levels of tolerance for the amount and type of error introduced. In the case of hyperspectral images, lossless compression is generally preferred, to prevent the loss of information that lossy compression would cause. The purpose of lossless compression is therefore to represent the data using a minimum number of bits by reducing the statistical redundancies inherent in hyperspectral images.
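As a hedged illustration of the quantities used throughout this survey to compare lossless and lossy schemes, the sketch below computes a compression ratio, the SNR of a reconstruction, and the maximum absolute error (the near-lossless criterion used later). The function names are ours, not from any cited work.

```python
import numpy as np

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """Ratio of original size to compressed size (higher is better)."""
    return original_bits / compressed_bits

def snr_db(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Signal-to-noise ratio of the reconstruction, in dB."""
    signal = np.asarray(original, dtype=np.float64)
    error = signal - np.asarray(reconstructed, dtype=np.float64)
    return 10.0 * np.log10(np.sum(signal**2) / np.sum(error**2))

def max_abs_error(original: np.ndarray, reconstructed: np.ndarray) -> int:
    """Peak absolute error: zero for lossless, bounded for near-lossless coding."""
    return int(np.max(np.abs(original.astype(np.int64) - reconstructed.astype(np.int64))))
```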

1.2. Organization of the Paper

The previous sections introduced the basic structure of hyperspectral images. The remainder of the paper is organized as follows. Section 2 provides an overview of four different hyperspectral compression techniques. Section 3 discusses the trade-offs confronted by each technique, based on the experimental results through which the performance of the compression schemes is measured.

2. HYPERSPECTRAL IMAGE COMPRESSION TECHNIQUES

2.1. Improved Karhunen-Loève Transform for Remote-Sensing Image Coding

The Karhunen-Loève (KL) transform is a statistical transformation that can be used to generate a set of uncorrelated variates from an analog signal specified by its autocorrelation function. A detailed discussion of the one- and two-dimensional KL transform is given in [1] and [2]. Basically, a continuous KL transform eliminates the need for scanning and sampling the continuous imagery and generates uncorrelated, ordered samples directly from the analog data. This approach, although simple and attractive, is almost impossible to implement, for the following reasons:

• The solution to the integral equation required to obtain the eigenfunctions is known only for specific types of autocorrelation functions.

• Samples can in principle be generated from analog imagery using analog filters with the appropriate impulse response, but such filters are very difficult to implement.

To overcome these problems, a new technique is incorporated into the KL transform: through a multilevel clustering approach, the computational cost of the KLT, which has a high spectral decorrelation capability, is reduced and its scalability is increased.

2.1.1. Multilevel Clustering

In this multilevel clustering structure, the three main sources that contribute to the computational cost of the KLT are the covariance matrix calculation and the forward and inverse applications of the transform. The effective computational complexity of these operations is O(n²), where n is the number of spectral components. To reduce this cost, the large transform is divided into multiple clusters and the transform is applied to the smaller clusters; the drawback of this approach is that it provides only local decorrelation within each cluster. Global decorrelation is therefore achieved by adding a further transform stage that decorrelates only the most important components. The multilevel clustering scheme is shown in Fig. 2; a small sketch of this two-level decorrelation follows the figure caption. The multilevel clustering KLT is defined by the notation c{level,index}, where C_L is the set of clusters at level L of the structure and the index gives the position within that level. As an example, consider the cluster set C = {c1,1, c1,2, c1,3, c2,1, c2,2, c3,1}. For each cluster c(i,j), let S(c(i,j)) be its size and N(c(i,j)) the number of its components that proceed to the next level. For instance, if the second cluster of level 1 has five components, of which only two are forwarded to the next level, then S(c1,2) = 5 and N(c1,2) = 2, as shown in Fig. 2.

Fig. 2. The multilevel clustering technique
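The following Python sketch (our illustration, not the authors' implementation) applies a two-level clustered KLT: each band cluster is decorrelated locally, and only the first few components of each cluster are forwarded to a second-level KLT for global decorrelation. Cluster sizes and the number of forwarded components are assumed by hand here; in the actual scheme they come from the candidate-structure search described below.

```python
import numpy as np

def klt(X):
    """Return eigenvalues and KLT basis (eigenvectors of the covariance, by decreasing eigenvalue)."""
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]

def clustered_klt(X, cluster_sizes, n_forward):
    """Level 1: independent KLT per band cluster. Level 2: KLT on the forwarded components."""
    transformed, forwarded = [], []
    start = 0
    for size, nf in zip(cluster_sizes, n_forward):
        Xc = X[:, start:start + size]
        _, V = klt(Xc)
        Yc = Xc @ V                      # local decorrelation inside the cluster
        transformed.append(Yc)
        forwarded.append(Yc[:, :nf])     # only the most important components go up
        start += size
    top = np.hstack(forwarded)
    _, V2 = klt(top)                     # global decorrelation of the forwarded parts
    return transformed, top @ V2

# e.g. 224 bands split as 64 + 64 + 96, forwarding 4 components from each cluster
X = np.random.randn(10_000, 224)
X -= X.mean(axis=0)                      # mean-removed spectra, one row per pixel
level1, level2 = clustered_klt(X, cluster_sizes=[64, 64, 96], n_forward=[4, 4, 4])
```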

The specification of the structure involves two choices: determining the size of each cluster and setting the threshold, using eigen-thresholding methods, that decides how many components are forwarded to the next stage, as shown in Fig. 2. Building the multilevel clustering is a two-stage process. In the first stage, several candidate structures are generated for each specific situation through local search and eigen-thresholding methods. Eigen-thresholding methods may be static or dynamic, e.g. the scree test [3], the Average Eigenvalue (AE) criterion, or the Empirical Indicator Function (EIF) [4]. If the thresholds are known a priori, the result is a fully static structure: the whole structure is fixed before it is applied and does not vary from image to image, i.e. the same value is set for every cluster. On the other hand, a dynamic structure is produced if the AE or EIF thresholding method is used to set the threshold individually for each cluster, without any training (a small example is sketched after the criteria below). The candidates are then screened to select the best clustering configurations, producing a list of candidate structures. To reduce the number of candidates, three constraints are added: cluster-size homogeneity, non-regularity of the structure, and a preference for structures with more clusters at the top level. Once a list of candidate structures has been generated, each candidate is tested against the others to determine the most suitable one. The suitability of a candidate structure is determined by three criteria:

• Quality is evaluated with the signal-to-noise ratio of the tested structure.

• Cost is calculated by counting the total floating-point operations required to apply and remove the transform.

• Scalability is evaluated by measuring the dependences involved in decoding only one component.
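As one concrete example of a dynamic eigen-thresholding rule mentioned above, the Average Eigenvalue criterion can be sketched as follows (our simplified reading: a cluster forwards the components whose eigenvalues exceed the cluster's mean eigenvalue).

```python
import numpy as np

def average_eigenvalue_rule(eigvals: np.ndarray) -> int:
    """AE criterion: forward the components whose eigenvalue exceeds the mean eigenvalue."""
    eigvals = np.sort(eigvals)[::-1]
    return int(np.count_nonzero(eigvals > eigvals.mean()))

# e.g. decide n_forward for one 64-band cluster instead of fixing it a priori
eigvals, _ = np.linalg.eigh(np.cov(np.random.randn(5_000, 64), rowvar=False))
print(average_eigenvalue_rule(eigvals))
```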

2.2. Anomaly-Based JPEG2000 Compression Technique

In this work, anomalous pixels are first extracted before compression and then replaced by values interpolated from the surrounding non-anomalous pixels. The resulting image is encoded using principal component analysis (PCA) for spectral decorrelation, followed by JPEG 2000 [5]. The anomalous pixels do not participate in the lossy compression and are transmitted in a lossless fashion, so that upon decoding they can be inserted back into the image. It has been shown [6], [7] that PCA in conjunction with JPEG2000 can provide superior rate-distortion performance for hyperspectral image compression, where PCA provides spectral decorrelation prior to the application of JPEG2000 to the resulting principal component (PC) images (referred to as PCA+JPEG2000). In particular, PCA+JPEG2000 outperforms DWT+JPEG2000, the corresponding strategy that uses a discrete wavelet transform (DWT) for spectral decorrelation. In this sense, spectral decorrelation is critical for hyperspectral compression, and PCA outperforms the DWT in this respect.
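A minimal sketch of the spectral-decorrelation step in PCA+JPEG2000 is given below (our illustration; the JPEG 2000 codec itself is treated as an external step applied to each PC image).

```python
import numpy as np

def pca_spectral_decorrelate(cube: np.ndarray):
    """cube: (rows, cols, bands) -> PC images plus the data needed to invert the transform."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    mean = X.mean(axis=0)
    X -= mean
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]
    pcs = (X @ eigvecs).reshape(rows, cols, bands)    # PC "images", most energetic first
    return pcs, mean, eigvecs

def pca_inverse(pcs, mean, eigvecs):
    rows, cols, bands = pcs.shape
    return (pcs.reshape(-1, bands) @ eigvecs.T + mean).reshape(rows, cols, bands)

# Each 2-D slice pcs[:, :, k] would then be handed to a JPEG 2000 encoder.
```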

2.2.1. ANOMALY-ADJUSTED COMPRESSION

In this method, a procedure to preserve anomalous pixels during compression is proposed. First, the RX algorithm [8], [9] is applied to detect anomalous pixels within the hyperspectral image. Next, to exploit data redundancy within the image, spectral decorrelation is performed with PCA. The identified anomalous pixels are then adjusted by mean removal, i.e., the anomalous pixels are averaged and the resulting mean is subtracted from each anomalous pixel. Finally, JPEG2000 is applied to the entire image. In the AA scheme, the anomalous-pixel mean is transmitted losslessly and separately from the rest of the data. Upon decoding, the mean is restored to each of the anomalous pixels, resulting in improved spectral fidelity of the anomalous pixels.

The AA approach in [10] is premised on the assumption that the anomalous pixels belong to a single class sharing the same statistics, specifically the same mean vector; in other words, although different from their surrounding pixels, the anomalies are assumed to be rather similar to one another. The drawback of the AA technique is that, depending on the dataset, this assumption sometimes holds and sometimes does not. In the latter case, the AA approach has difficulty preserving the anomalous pixels.
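The sketch below illustrates, under our own simplifying assumptions, the two ingredients of the AA scheme: a global RX detector (Mahalanobis distance from the scene mean) and the mean-removal adjustment of the detected anomalous pixels. The detection threshold is an arbitrary choice for illustration.

```python
import numpy as np

def rx_scores(cube: np.ndarray) -> np.ndarray:
    """Global RX: Mahalanobis distance of every pixel spectrum from the scene mean."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    mean = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    D = X - mean
    scores = np.einsum('ij,jk,ik->i', D, cov_inv, D)
    return scores.reshape(rows, cols)

def anomaly_adjust(cube: np.ndarray, anomaly_mask: np.ndarray):
    """Subtract the anomalous-pixel mean spectrum from each anomalous pixel; the mean
    itself is kept aside and sent losslessly so it can be restored at the decoder."""
    adjusted = cube.astype(np.float64).copy()
    anomaly_mean = adjusted[anomaly_mask].mean(axis=0)
    adjusted[anomaly_mask] -= anomaly_mean
    return adjusted, anomaly_mean

scores = rx_scores(np.random.rand(64, 64, 32))
mask = scores > np.percentile(scores, 99.5)     # assumed threshold, for illustration only
```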

2.2.2. ANOMALY-REMOVED COMPRESSION

In this approach the author proposes to completely remove anomalies prior to compression, in contrast to the previously discussed AA technique. In the AR approach of [10], anomaly detection, such as the RX algorithm [8], [9], is applied first to identify anomalous pixels. These pixels are then extracted from the dataset and transmitted losslessly, independently of the remainder of the dataset. In order to compress the rest of the image, the anomalous pixels in the original dataset are replaced by values interpolated from neighbouring pixels. Specifically, an isolated anomalous pixel vector is replaced with the average of the eight pixels immediately surrounding it spatially; for larger regions, the entire anomalous region is replaced by the average of the non-anomalous pixels along the boundary of the region. Since this spatial-averaging interpolation is a simple form of low-pass filtering, the high spatial-frequency components produced by anomalous pixels tend to be suppressed, leading to increased compression efficiency. The PCA spectral transform is then calculated and applied to the resulting dataset, followed by PCA+JPEG2000 or Sub-PCA+JPEG2000 coding. After decoding, the original anomalous pixels are inserted back into the reconstructed image. To permit restoration of the anomalous pixels after decompression, several items of ancillary data must be provided by the encoder separately from the JPEG2000 compressed bit stream. In the reported experiments, each anomalous pixel vector is represented (uncompressed) using 16 bits per vector component; for anomaly locations, the row and column indices are represented using 9 bits each. Although this ancillary information is technically independent of the JPEG2000 bit stream, it can be embedded directly into a JP2- or JPX-format compressed file using one or more UUID boxes, which are designed to carry application-specific user data.
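A hedged sketch of the AR replacement step for isolated anomalous pixels follows; the handling of larger anomalous regions (boundary averaging) is omitted for brevity, and the original anomalous spectra would be transmitted losslessly on the side.

```python
import numpy as np

def replace_isolated_anomalies(cube: np.ndarray, anomaly_mask: np.ndarray) -> np.ndarray:
    """Overwrite each flagged pixel with the average spectrum of its eight
    non-anomalous spatial neighbours, prior to PCA+JPEG2000 coding."""
    out = cube.astype(np.float64).copy()
    rows, cols, _ = cube.shape
    for r, c in zip(*np.nonzero(anomaly_mask)):
        neighbours = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols and not anomaly_mask[rr, cc]:
                    neighbours.append(cube[rr, cc, :])
        if neighbours:                               # isolated anomaly: average surrounding spectra
            out[r, c, :] = np.mean(neighbours, axis=0)
    return out
```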

2.3. UNIFIED LOSSY AND NEAR-LOSSLESS COMPRESSION BASED ON KLT+JPEG2000

In this work, a compression algorithm featuring both lossy and near-lossless compression of hyperspectral images is implemented. As the algorithm is based on JPEG 2000, it provides better near-lossless compression performance than 3D-CALIC. The author addresses two key aspects. First, hyperspectral images are 3-D data and contain a significant degree of spectral correlation, which heavily affects the entropy coder design. Second, the rate of the lossy layer is chosen so as to minimize the overall rate [11].

2.3.1. Prediction Loop for Near-Lossless Compression

The proposed algorithm is based on a non-causal DPCM scheme, as shown in Fig. 3. The original hyperspectral image is denoted by I and the prediction by I_L. The prediction is computed as follows. First, the original image is fed to the lossy compression algorithm of [12] with a desired target rate. The image is then encoded and decoded, and the reconstructed image I_L is used as the predictor. In order to decode the near-lossless layer, the complete lossy layer has to be decoded first. The residual image is obtained as e = I − I_L. Uniform scalar quantization with step size 2δ + 1, where δ is the maximum allowed absolute error, is applied to e as

e_Q = sign(e) · ⌊(|e| + δ) / (2δ + 1)⌋     (1)

The coefficients e_Q are then decorrelated and entropy coded as described in Section 2.3.3.

At the decoder, the predictor I_L is recovered by extracting the lossy layer from the compressed file and decoding it. The near-lossless layer is then entropy decoded and inverse predicted to yield e_Q. The near-lossless reconstructed image I_R is obtained as I_R = I_L + (2δ + 1)·e_Q. Since the quantization step size for e is 2δ + 1, a maximum absolute error of δ is guaranteed between I_R and I.
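The near-lossless mechanism can be illustrated as follows (a sketch assuming the quantizer of (1); the decoded lossy layer is replaced here by a synthetic perturbation of the original image).

```python
import numpy as np

def quantize_residual(e: np.ndarray, delta: int) -> np.ndarray:
    """Uniform scalar quantization of the residual with step 2*delta + 1, as in (1)."""
    return np.sign(e) * ((np.abs(e) + delta) // (2 * delta + 1))

def reconstruct(I_L: np.ndarray, e_Q: np.ndarray, delta: int) -> np.ndarray:
    """Near-lossless reconstruction I_R = I_L + (2*delta + 1) * e_Q."""
    return I_L + (2 * delta + 1) * e_Q

I   = np.random.randint(0, 4096, size=(64, 64, 32))
I_L = I + np.random.randint(-20, 21, size=I.shape)       # stand-in for the decoded lossy layer
delta = 2
e_Q = quantize_residual(I - I_L, delta)
I_R = reconstruct(I_L, e_Q, delta)
assert np.max(np.abs(I - I_R)) <= delta                   # the peak error bound holds
```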

Fig. 3. Architecture of the lossy and near-lossless compression algorithm based on KLT+JPEG2000

2.3.2. Lossy Compression Stage

To obtain I_L, the proposed scheme employs the algorithm in [12], which exploits the multicomponent transformation (MCT) feature of JPEG 2000 Part 2 [13] to take maximum advantage of the spectral correlation present in hyperspectral images.

2.3.3. Coding of the Residual Image

A key component of the proposed scheme lies in the entropy coding stage, whose purpose is to yield a compact description of the residual image e_Q. Straightforward arithmetic coding of e_Q, as done in [11], can be highly suboptimal and lead to poor bit rates. The main objective is to obtain the minimum total bit rate for a given δ; this typically leads to slightly smaller lossy-layer rates that leave some residual spatial correlation in e_Q. Since e_Q exhibits this residual spatial correlation, its entropy coding must be preceded by a spatial decorrelation stage, so a 2-D version of CALIC is adopted as the decorrelator and entropy coding stage for the residual layer.
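To illustrate the idea of spatially decorrelating e_Q before entropy coding, the sketch below uses the median edge detector (MED) predictor of LOCO-I/JPEG-LS as a simple stand-in; CALIC's actual GAP predictor and context modelling, used in the proposed scheme, are more elaborate.

```python
import numpy as np

def med_predict(img: np.ndarray) -> np.ndarray:
    """Predict each sample from its west (w), north (n) and north-west (nw) neighbours."""
    pred = np.zeros_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            w  = img[r, c - 1] if c > 0 else 0
            n  = img[r - 1, c] if r > 0 else 0
            nw = img[r - 1, c - 1] if r > 0 and c > 0 else 0
            if nw >= max(w, n):
                pred[r, c] = min(w, n)
            elif nw <= min(w, n):
                pred[r, c] = max(w, n)
            else:
                pred[r, c] = w + n - nw
    return pred

residual_plane = np.random.randint(-3, 4, size=(64, 64))           # one spatial plane of e_Q
prediction_error = residual_plane - med_predict(residual_plane)     # what gets entropy coded
```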

2.3.4. Allocation of Lossy Layer Rate

Given that the user specifies the maximum absolute error δ for the near-lossless residual layer, the bit rate of the lossy layer has to be selected optimally (a sketch of this search follows the observations below). It can be seen that:

• At small lossy layer rates, the rate required to encode the residual layer is high, making the total rate high. Increasing the lossy layer rate provides better trade-offs.

        • At high lossy layer rates, the residual image becomes quite noisy, again increasing the total rate. However, there is a rather broad region of rates that provides near-optimal performance.
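A hedged sketch of the rate-allocation search follows: for each candidate lossy-layer rate, the residual-layer cost is estimated here by the zero-order entropy of e_Q, and the allocation with the smallest total is kept. `encode_decode_lossy` is a hypothetical stand-in for the KLT+JPEG2000 lossy stage, not an actual API.

```python
import numpy as np

def entropy_bits_per_sample(x: np.ndarray) -> float:
    """Zero-order entropy of the symbols in x, in bits per sample."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def allocate_lossy_rate(image, candidate_rates, delta, encode_decode_lossy):
    """Try a grid of lossy-layer rates and keep the one minimizing the estimated total rate."""
    best = None
    for rate in candidate_rates:
        I_L = encode_decode_lossy(image, rate)                          # lossy layer at `rate` bpppb
        e_Q = np.sign(image - I_L) * ((np.abs(image - I_L) + delta) // (2 * delta + 1))
        total = rate + entropy_bits_per_sample(e_Q)                     # lossy + residual-layer rate
        if best is None or total < best[1]:
            best = (rate, total)
    return best   # (chosen lossy-layer rate, estimated total rate in bpppb)
```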

2.4. Lossy-to-Lossless Compression Using the 3D Embedded Zero Block Coding Algorithm

In this work, a lossy-to-lossless hyperspectral image compression coder employing the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm is proposed. To exploit data redundancy within the hyperspectral image through decorrelation, a three-dimensional integer wavelet packet transform with unitary scaling is adopted. The 3D EZBC algorithm, without motion compensation, is then used for bit-plane zeroblock coding.

2.4.1. Three-Dimensional Integer Wavelet Transform

To realize lossy-to-lossless image compression based on the wavelet transform, the integer-based lifting scheme is an indispensable tool. It requires three steps to perform the reversible integer-to-integer wavelet transform, namely split, predict and update, with rounding of each filter output [14], [15]. There are many different 3D wavelet transform structures [14], [16], [17], according to the order of decomposition in the spatial-horizontal, spatial-vertical and spectral-slice directions. To achieve better lossy coding performance, a simple approach based on bit shifting of the wavelet coefficients is adopted to make the integer WT approximately unitary. This unitary scaling structure yields not only better lossless performance, but also excellent integer-based lossy performance (a small lifting example is sketched after Fig. 4).

Fig. 4. (a) The spatial scaling factors; (b) the spectral scaling factors; (c) the 3D integer WPT structure with two spatial levels and two spectral levels, where the numbers on the front upper-left corner of the subbands indicate the initialization order of the LIN lists for 3D EZBC, as proposed by Xiong et al. in [14]
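The reversible split/predict/update machinery can be illustrated with the integer 5/3 lifting step used by JPEG 2000, sketched below for a 1-D signal of even length (our simplification; the full scheme applies such steps along the two spatial directions and the spectral direction, with the scaling of Fig. 4).

```python
import numpy as np

def lift_53_forward(x: np.ndarray):
    """One reversible 5/3 lifting step: split into even/odd, predict, update."""
    s, d = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    s_ext = np.append(s, s[-1])                       # symmetric extension on the right
    d -= (s_ext[:-1] + s_ext[1:]) // 2                # predict: high-pass (detail) samples
    d_ext = np.insert(d, 0, d[0])                     # symmetric extension on the left
    s += (d_ext[:-1] + d_ext[1:] + 2) // 4            # update: low-pass (approximation) samples
    return s, d

def lift_53_inverse(s: np.ndarray, d: np.ndarray):
    d_ext = np.insert(d, 0, d[0])
    s = s - (d_ext[:-1] + d_ext[1:] + 2) // 4         # undo update
    s_ext = np.append(s, s[-1])
    d = d + (s_ext[:-1] + s_ext[1:]) // 2             # undo predict
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d                           # re-interleave even and odd samples
    return x

x = np.random.randint(0, 4096, size=64)
assert np.array_equal(lift_53_inverse(*lift_53_forward(x)), x)    # perfectly reversible
```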

2.4.2. Hyperspectral Image Lossy-to-Lossless Coder Based on the 3D EZBC Algorithm

The Embedded ZeroBlock Coding and context modeling (EZBC) algorithm proposed by Hsiang and Woods is a state-of-the-art image compression algorithm using two powerful embedded techniques: hierarchical set-partitioning zeroblock coding and context-based adaptive arithmetic coding [16]. The 3D EZBC coder provides not only low computational complexity and excellent compression performance, but also quality, resolution and temporal scalability [17]. 3D EZBC is an embedded zeroblock bit-plane coding algorithm that effectively exploits the energy-clustering nature within subbands and the strong dependency across subbands.

Fig. 5. Architecture of the lossy-to-lossless coder using 3D Embedded Zero Block Coding

The complete coding procedure can be summarized in the following three steps.

First, a hierarchical pyramidal structure is determined for the hyperspectral image through the 3D integer wavelet packet transform with the unitary scaling structure of Fig. 4.

Second, the 3D EZBC coder establishes a quadtree representation with the hierarchical pyramidal model for each individual 2D subband before the set-partitioning bit-plane coding process starts. 3D EZBC adopts bit-plane coding to progressively encode the wavelet coefficients of each subband from the most significant bit (MSB) plane toward the least significant bit (LSB) plane.

Finally, coding performance is improved further by the context-based adaptive arithmetic coding used in 3D EZBC to encode the significance map, signs and refinement bit streams. 3D EZBC exploits two statistical dependencies: the intra-band correlation among quadtree nodes at the same quadtree level within a subband, and the inter-band correlation among quadtree nodes across subbands.
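The zeroblock principle behind these steps can be sketched as follows (our illustration): a quadtree of per-block maximum magnitudes lets the coder declare an entire block insignificant at a given bit plane with a single test instead of visiting every coefficient.

```python
import numpy as np

def max_quadtree(subband: np.ndarray):
    """Bottom-up pyramid of per-block maximum magnitudes (side lengths assumed powers of two)."""
    levels = [np.abs(subband).astype(np.int64)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append(np.max([a[0::2, 0::2], a[0::2, 1::2],
                              a[1::2, 0::2], a[1::2, 1::2]], axis=0))
    return levels            # levels[0] = coefficients, levels[-1] = 1x1 root

def block_significant(levels, level: int, i: int, j: int, bitplane: int) -> bool:
    """True if any coefficient inside block (i, j) at `level` is significant at `bitplane`."""
    return bool(levels[level][i, j] >= (1 << bitplane))

coeffs = np.random.randint(-255, 256, size=(16, 16))
tree = max_quadtree(coeffs)
print(block_significant(tree, level=len(tree) - 1, i=0, j=0, bitplane=7))
```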

3. DISCUSSION

For the work described in Section 2.1, lossless compression rates are reported; it is seen that the static clustered approach has little impact, whereas the dynamic clustered approach produces larger bit streams, with compression ratios between those of the Reduced KLT and the IWT. The static transform allows the KLT to be applied under very reasonable resource constraints. The dynamic transform, on the other hand, can be taken as a direct replacement for the DWT in spectral coding, improving on the DWT in all three measured criteria: quality, cost, and scalability.

For the work described in Section 2.2, SNR is employed as the data-fidelity measure, since it is widely used to assess rate-distortion performance. For both AA and AR, anomaly detection results can be retrieved directly from the compressed bit stream, since the anomaly locations are transmitted losslessly. The post-compression anomaly detection is intended simply as a means of objectively evaluating how well anomalies can be extracted from the reconstructed images. Even when the anomalies are perfectly preserved, some anomalies may fail to be detected and some background pixels may produce false alarms, owing to the effects of compression on the background.

For the work described in Section 2.3, the author evaluates the near-lossless compression performance of the proposed algorithm and compares it with existing algorithms. Lossy-layer rates of 0.25, 0.5, 0.4, and 0.5 bpppb are selected for Cuprite, Jasper Ridge, Moffett Field, and Purdue Indian Pines, respectively. The near-lossless version of 3D-CALIC is employed as the benchmark. The bit rates achieved for near-lossless compression of sample hyperspectral images such as Cuprite, Jasper Ridge and Moffett Field are 3.60, 3.61 and 3.69 bpppb, respectively. It can be seen that, for smaller compression ratios, the proposed scheme achieves a significantly lower bit rate without any loss of information.


For the work described in Section 2.4, the lossless compression performance is reported in bits per pixel per band (bpppb), which measures the size of the compressed data stream. The coding experiments are performed on four signed 16-bit radiance AVIRIS hyperspectral images, namely Cuprite scene 1, Jasper Ridge scene 1, Low Altitude scene 1 and Lunar Lake scene 1. The experimental results validate that 3D EZBC outperforms 3D SPECK, 3D SPIHT and AT-3D SPIHT: the average bit rate achieved by 3D EZBC is 5.70% lower than that of 3D SPECK, 7.14% lower than 3D SPIHT, 4.96% lower than AT-3D SPIHT, and 1.07% higher than JPEG2000-MC.

4. CONCLUSION

In this paper, recent developments in the area of hyperspectral image compression have been presented. All of the techniques reviewed in this work provide a clear vision of how to achieve lossless compression with a high compression ratio, taking into account compactness of representation, speed, and cost in terms of processing time and the number of computations required. From the observations made so far, algorithms suitable for on-board satellite compression, whether lossless or near-lossless, should have favourable characteristics such as low complexity, low power and storage requirements and, last but not least, the capability of working on raw (uncalibrated) images as they are provided by advanced imaging sensors, since the performance ranking of compression algorithms may be different on raw uncalibrated data; this would allow the raw data themselves to be compressed.

References

  1. J. J. Y. Huang and P. M. Schultheiss, "Block quantization of correlated Gaussian random variables," IRE Transactions on Communication Systems, vol. CS-11, no. 3, pp. 289-296, 1963.

  2. A. Habibi and P. A. Wintz, "Image coding by linear transformation and block quantization," IEEE Transactions on Communication Technology, vol. COM-19, no. 1, pp. 50-63, February 1971.

  3. R. B. Cattell, "The scree test for the number of factors," Multivariate Behav. Res., vol. 1, no. 2, pp. 245-276, 1966.

  4. E. R. Malinowski, "Determination of the number of factors and the experimental error in a data matrix," Anal. Chem., vol. 49, no. 4, pp. 612-617, Apr. 1977.

  5. Information Technology - JPEG 2000 Image Coding System - Part 1: Core Coding System, ISO/IEC 15444-1, 2000.

  6. J. E. Fowler and J. T. Rucker, "3D wavelet-based compression of hyperspectral imagery," in Hyperspectral Data Exploitation: Theory and Applications, C.-I Chang, Ed. Hoboken, NJ: Wiley, 2007, ch. 14, pp. 379-407.

  7. B. Penna, T. Tillo, E. Magli, and G. Olmo, "Transform coding techniques for lossy hyperspectral data compression," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 5, pp. 1408-1421, May 2007.

  8. B. Penna, T. Tillo, E. Magli, and G. Olmo, "Hyperspectral image compression employing a model of anomalous pixels," IEEE Geosci. Remote Sens. Lett., vol. 4, no. 4, pp. 664-668, 2007.

  9. I. S. Reed and X. Yu, "Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution," IEEE Trans. Acoust., Speech, Signal Process., vol. 38, no. 10, pp. 1760-1770, Oct. 1990.

  10. C.-I Chang and S.-S. Chiang, "Anomaly detection and classification for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 40, no. 6, pp. 1314-1325, Jun. 2002.

  11. S. Yea and W. A. Pearlman, "A wavelet-based two-stage near-lossless coder," IEEE Trans. Image Process., vol. 15, no. 11, pp. 3488-3500, Nov. 2006.

  12. B. Penna, T. Tillo, E. Magli, and G. Olmo, "Transform coding techniques for lossy hyperspectral data compression," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 5, pp. 1408-1421, May 2007.

  13. E. Christophe, D. Léger, and C. Mailhes, "Quality criteria benchmark for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 9, pp. 2103-2114, Sep. 2005.

  14. Z. X. Xiong, X. L. Wu, S. Cheng, and J. P. Hua, "Lossy-to-lossless compression of medical volumetric data using three-dimensional integer wavelet transforms," IEEE Trans. Medical Imaging, vol. 22, 2003.

  15. M. D. Adams and F. Kossentini, "Reversible integer-to-integer wavelet transforms for image compression: Performance evaluation and analysis," IEEE Trans. Image Processing, vol. 9, pp. 1010-1024, 2000.

  16. X. Tang and W. A. Pearlman, "Three-dimensional wavelet-based compression of hyperspectral images," ch. 10 in Hyperspectral Data Compression, G. Motta, F. Rizzo, and J. A. Storer, Eds. Norwell, MA: Kluwer Academic Publishers, 2006, pp. 273-308.

  17. J. E. Fowler and J. T. Rucker, "3D wavelet-based compression of hyperspectral imagery," John Wiley & Sons, Inc., Hoboken, NJ, 2007, pp. 379-407.

  18. http://aviris.jpl.nasa.gov/html/aviris.overview.html
