Fusion of Panchromatic and Multispectral Image using PCA and Wavelet Transform

DOI : 10.17577/IJERTV8IS110413




D. Hemasree

PG Student Department Of Electronics and Communications Engineering

Sri Venkateswara University College Of Engineering Tirupati, India

Dr. S. Narayana Reddy

Professor Department Of Electronics and Communications Engineering

Sri Venkateswara University College Of Engineering Tirupati, India

V. Raja Rajeswari

Ph.D Scholar Department Of Electronics and Communications Engineering

Sri Venkateswara University College Of Engineering Tirupati, India

Abstract Fusion of two or more images into a single image enhances the resolution of the result, yielding a more precise image. In the field of remote sensing, image fusion is the process of reconciling two or more images obtained from sensors or satellites at different times, or from different sensors at the same instant. Fusion transfers spectral and spatial information without introducing artifacts. Most fusion techniques aim at spatial enrichment while preserving spectral stability between the images. The objective of this paper is to compare, analyze and estimate various quality measures for the fusion of panchromatic and multispectral images. Different fusion methods, such as Intensity-Hue-Saturation (IHS), Principal Component Analysis (PCA), Wavelet Transform, and combined PCA and wavelet based techniques, are performed and evaluated, and the quality of the fused images is analysed.

Keywords Image fusion, Remote Sensing, Spatial, Spectral, Intensity-Hue-Saturation(IHS), Principal Component Analysis(PCA), Wavelet Transform.

  1. INTRODUCTION:

    The term remote sensing refers to observing something, such as an area or object, from a distance with the help of sensors placed in an aircraft or a satellite. The quality of the images obtained by remote sensing sensors is determined by two factors: spatial and spectral resolution. These two resolutions are inversely related, a trade-off that arises from design, structural and observational constraints. Remote sensing sensors produce two kinds of images: multispectral (MS) and panchromatic (PAN). Multispectral images have better spectral resolution but poor spatial resolution, while panchromatic images have better spatial resolution but poor spectral resolution. Image fusion is the technique by which a panchromatic (PAN) image with high spatial quality and low spectral quality and a

    multispectral (MS) image with low spatial quality and high spectral quality are combined to obtain a fused image with excellent spatial and spectral resolution. Image fusion techniques can be classified into several types depending on whether fusion is performed in the spatial or frequency domain; accordingly, they are commonly divided into spatial domain and spectral domain techniques [2]. Many fusion methods have been proposed, such as the Intensity-Hue-Saturation (IHS) transform, Principal Component Analysis (PCA), High-Pass Filtering (HPF), the Laplacian pyramid and the wavelet transform. Among these, the most widely used are the IHS transformation and PCA, which are spatial domain techniques. The fusion rule is the key aspect that determines the quality of a fusion technique. The basic principle of multiresolution image fusion is that the high-frequency components present in the panchromatic (PAN) image are injected into the resampled multispectral (MS) image.

  2. METHODOLOGY:

    1. IHS Fusion technique:

      When the multispectral (MS) image is represented in RGB color space, the IHS transform separates the intensity (I) from the color information, hue (H) and saturation (S). The I component can be regarded as an image without color information. Because the I component resembles the PAN image, the histogram of the PAN image is matched to the histogram of the I component. Then, the I component is replaced by the high-resolution PAN image before the inverse IHS transform is applied. Figure 1 illustrates the block diagram of standard IHS fusion.

      Fig 1: IHS fusion technique.

      The main algorithm consists of the following steps:

      1. Perform image registration (IR) to PAN and MS, and resample MS.

      2. Convert MS from RGB space into IHS space.

      3. Match the histogram of PAN to the histogram of the I component.

      4. Replace the I component with PAN.

      5. Convert the fused MS back to RGB space.
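The five steps above can be sketched in a few lines of NumPy. As a simplification, this sketch assumes co-registered inputs, uses the linear intensity model I = (R+G+B)/3, and approximates histogram matching by mean/variance adjustment; the function name `ihs_fuse` is illustrative:

```python
import numpy as np

def ihs_fuse(ms_rgb, pan):
    """IHS-style fusion: swap the intensity component of the
    (resampled) MS image for the statistics-matched PAN band.
    Uses the simple linear model I = (R + G + B) / 3."""
    ms = ms_rgb.astype(float)
    i = ms.mean(axis=2)                      # intensity component
    pan = pan.astype(float)
    # histogram matching approximated by mean/std adjustment
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12) * i.std() + i.mean()
    # additive substitution: the inverse of the linear IHS model
    # (replacing I by pan_m adds the same offset to each band)
    return ms + (pan_m - i)[..., None]

# usage: fused = ihs_fuse(ms, pan) with ms of shape (H, W, 3), pan (H, W)
```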

    2. Brovey transform:

      The Brovey transform (BT) is a numerical fusion method based on the chromaticity transform and is one of the panchromatic (PAN) sharpening techniques. It focuses on fusing the images while preserving the colours of the original optical image [7]. The Brovey transform is based on spectral modelling and was developed to increase the visual contrast in the high and low ends of the data's histogram. It is a simple method for combining data from different sensors, with the limitation that only three bands are involved. The method normalizes the three bands used for RGB and multiplies the result by any other desired data to add an intensity or brightness component to the image. The resolution merge Brovey transform model is derived from this algorithm.

      Successful application of this technique requires an experienced analyst for the specific adaptation of parameters. For each band k in {R, G, B} the fused band is given by:

      F_k = MS_k x PAN / (MS_R + MS_G + MS_B)
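The normalise-and-scale step can be sketched as follows; `brovey_fuse` is an illustrative name, and a small `eps` is an assumption added to avoid division by zero in dark regions:

```python
import numpy as np

def brovey_fuse(ms_rgb, pan, eps=1e-12):
    """Brovey transform: normalise each MS band by the band sum,
    then scale by the PAN intensity."""
    ms = ms_rgb.astype(float)
    pan = pan.astype(float)
    band_sum = ms.sum(axis=2) + eps          # MS_R + MS_G + MS_B per pixel
    return ms * (pan / band_sum)[..., None]  # F_k = MS_k * PAN / band_sum
```

By construction the fused band sum equals the PAN value at every pixel, which is why the method preserves chromaticity (band ratios) while replacing brightness.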

    3. Gram-Schmidt:

      The Gram-Schmidt orthogonalization procedure forms the basis of another pan-sharpening method. In the Gram-Schmidt pan-sharpening method, the first step is to create a low-resolution panchromatic (PAN) band by computing a weighted average of the multispectral (MS) bands. Next, these bands are decorrelated using the Gram-Schmidt orthogonalization algorithm, treating each band as

      one multidimensional vector. The simulated low-resolution panchromatic(PAN) band is used as the first vector which is not rotated or transformed. The low resolution panchromatic(PAN) band is then replaced by the high- resolution panchromatic(PAN) band and all bands are back-transformed in high resolution.

      In a nutshell, the Gram-Schmidt pan-sharpening method has the following steps:

      1. Compute a simulated low resolution panchromatic(PAN) band as a linear combination of the n multispectral(MS) bands.

      2. The Gram-Schmidt transformation is performed on the simulated lower spatial resolution panchromatic(PAN) image and the pure low spatial resolution multispectral(MS) band images. This Gram-Schmidt forward transform de-correlates the bands.

      3. The statistics of the higher spatial resolution panchromatic (PAN) image are adjusted to match the statistics of the first band resulting from the Gram-Schmidt transformation, producing a modified higher spatial resolution panchromatic (PAN) image that can be substituted into the transformed bands.

      4. Reverse the forward Gram-Schmidt transform using the same transform coefficients, but on the high resolution bands. The result of this backward Gram- Schmidt transform is the pan-sharpened image in high resolution[7].
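A simplified single-resolution sketch of these four steps, assuming the inputs are already co-registered and resampled to the same grid, and using a plain band average as the simulated PAN (the name `gs_pansharpen` is illustrative):

```python
import numpy as np

def gs_pansharpen(ms, pan):
    """Simplified Gram-Schmidt pan-sharpening sketch.
    ms: (H, W, B) multispectral, pan: (H, W) panchromatic."""
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(float)
    p = pan.ravel().astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    # step 1: simulated low-resolution PAN band (here: plain band average)
    gs1 = X.mean(axis=1)
    g = gs1 - gs1.mean()
    # step 2: Gram-Schmidt step: project each band onto gs1, keep residuals
    coeffs = Xc.T @ g / (g @ g)
    resid = Xc - np.outer(g, coeffs)
    # step 3: adjust PAN statistics to match the simulated band
    p_adj = (p - p.mean()) / (p.std() + 1e-12) * gs1.std() + gs1.mean()
    # step 4: back-transform with the real PAN substituted for gs1
    g_new = p_adj - p_adj.mean()
    fused = mu + np.outer(g_new, coeffs) + resid
    return fused.reshape(h, w, b)
```

If the PAN band happens to equal the simulated band, the back-transform reproduces the original MS exactly, which is a useful sanity check on the substitution logic.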

  3. PCA and Wavelet based Image fusion technique:

        1. PCA Fusion technique:

          An alternative to the IHS-based method is principal component analysis (PCA). The multispectral (MS) bands are correlated, and the PCA transform converts them into a set of uncorrelated components, say PC1, PC2 and PC3. The first principal component (PC1) resembles the panchromatic (PAN) image, so the PCA fusion scheme is similar to the IHS fusion scheme [9-10, 12]. The block diagram of the PCA fusion scheme is illustrated in the figure given below.

          Fig 2: PCA fusion method.

          The main algorithm for PCA fusion is described in steps given below:

          1. Perform IR to PAN and MS, and resample MS.

          2. Convert the MS bands into PC1, PC2, PC3, by PCA transform.

          3. Match the histogram of PAN to the histogram of PC1.

          4. Replace PC1 with PAN.

          5. Convert PAN, PC2, PC3, back by reverse PCA.

          6. Fused output MS image is observed.
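A compact sketch of the steps above, with mean/variance matching standing in for full histogram matching (`pca_fuse` is an illustrative name):

```python
import numpy as np

def pca_fuse(ms, pan):
    """PCA fusion: transform MS bands to principal components,
    replace PC1 with the statistics-matched PAN band, invert."""
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    # eigen-decomposition of the band covariance matrix
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]   # order so PC1 comes first
    pcs = Xc @ vecs                          # principal components
    # match PAN statistics to PC1, then substitute it for PC1
    p = pan.ravel().astype(float)
    pcs[:, 0] = (p - p.mean()) / (p.std() + 1e-12) \
        * pcs[:, 0].std() + pcs[:, 0].mean()
    fused = pcs @ vecs.T + mu                # inverse PCA transform
    return fused.reshape(h, w, b)
```

Because the substituted PC1 keeps the original PC1 statistics, the per-band means of the fused image match those of the input MS image.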

        2. Wavelet based fusion:

          In wavelet based fusion, the multispectral (MS) and panchromatic (PAN) images are decomposed using wavelet transforms. This multiresolution approach is suited to images of different resolutions, as it decomposes each image into different kinds of coefficients. The coefficients from the different images are combined according to fusion rules to form new coefficients, and applying the inverse DWT (IDWT) to the fused coefficient map gives the fused image [18]. The figure given below shows the wavelet based fusion scheme.

          Fig 3:Wavelet transform fusion method.

          The algorithm for the wavelet based image fusion scheme is given below:

          1. Perform IR to PAN and MSi, and resample MSi.

          2. Match the histogram of PAN to the histogram of MSi.

          3. Apply DWT to both the histogram-matched PAN and MSi.

          4. Replace the detail sub-images (H1, H2 and H3) of MSi with those of PAN.

          5. Perform IDWT on the new combined set of sub-images.
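The steps above can be sketched with a hand-rolled one-level Haar DWT so the example stays self-contained (in practice a library such as PyWavelets would be used, and more levels and better wavelets chosen; all function names here are illustrative):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a + b + c + d) / 2, ((a + b - c - d) / 2,
                                 (a - b + c - d) / 2,
                                 (a - b - c + d) / 2)

def haar_idwt2(ll, details):
    """Inverse of haar_dwt2."""
    lh, hl, hh = details
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll + lh - hl - hh) / 2
    out[1::2, 0::2] = (ll - lh + hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def wavelet_fuse(ms_band, pan):
    """Keep the MS approximation (LL), inject the PAN detail sub-images."""
    ll_ms, _ = haar_dwt2(ms_band.astype(float))
    _, det_pan = haar_dwt2(pan.astype(float))
    return haar_idwt2(ll_ms, det_pan)
```

`wavelet_fuse` implements the substitution rule of steps 3 and 4: the spectral content of the MS band survives in LL, while the spatial detail comes from PAN.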

        3. Wavelet based PCA fusion technique:

    The wavelet based principal component analysis (PCA) fusion technique is used to reduce the spectral distortions that arise when fusing the original panchromatic (PAN) and multispectral (MS) images. In the first step the input images are decomposed into their multiscale edge representation using the wavelet transform. The actual fusion takes place in the wavelet domain, where the fused multiscale representation is built by a pixel-by-pixel selection of the coefficients with maximum magnitude. Finally, the fused image is computed by applying the appropriate reconstruction scheme. The first principal component image is replaced with the stretched panchromatic (PAN) data because the first principal component carries the information common to all the bands.

    Fig 4: PCA and Wavelet based fusion.

    This method includes seven steps:

    1. Geometric registration: We use a 3 by 3 weighted mask to enlarge the Landsat TM images so that their size matches the SPOT PAN images.

    2. PCA transformation: PCA is a mathematical transformation that generates new images as linear combinations of the components of the original images. The transformation generates a new set of orthogonal axes; the new images are represented on these axes, so their components are uncorrelated.

    3. Histogram matching: Histogram matching is used to adjust the spectral distribution of the high-resolution image to match that of the low-resolution multispectral images.

    4. Wavelet decomposition: The result of the multiresolution wavelet decomposition (MWD) is represented by,

      where n is the number of detail images.

    5. Fusion: We replace the S component, the content image of the specified PAN image, with the first principal component image, which has the same size as S.

    6. Wavelet reconstruction: reconstruct Y1, the first principal component image of the multispectral images, together with D, the detail images of the specified panchromatic (PAN) image, into the fused image Fnew by the equation

      where n is the number of detail images. The process of integrating wavelet decomposition, fusion and wavelet reconstruction is called wavelet based image fusion; it replaces the content image of the high-resolution image with that of the low-resolution multispectral image.

    7. PCA inverse transformation: The inverse PCA equation is used to transform the fused image and the other component images back into the original space.

    The result is the high-resolution multispectral image.

  4. QUALITY ASSESSMENT:

    Quality assessment is the process of determining the effective quality of the fused image with respect to different quality parameters. The quality metrics used for evaluating the fusion techniques are as follows:

  1. Correlation Coefficient (CC):

The similarity between the fused and reference images can be calculated using the correlation coefficient. A CC of unity indicates that the two images are identical. It is a reference-based quality metric, defined as

CC = SUM_i SUM_j (I_f(i,j) - mean(I_f)) (I_r(i,j) - mean(I_r)) / sqrt( SUM_i SUM_j (I_f(i,j) - mean(I_f))^2 * SUM_i SUM_j (I_r(i,j) - mean(I_r))^2 )

where I_f and I_r are the fused and reference images, mean(I_f) and mean(I_r) are their mean values, and I_f(i,j) and I_r(i,j) are the pixel values at pixel (i,j).
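A direct NumPy implementation of the correlation coefficient as defined above:

```python
import numpy as np

def correlation_coefficient(i_f, i_r):
    """Pearson correlation between fused image i_f and reference i_r."""
    f = i_f.astype(float).ravel() - i_f.mean()   # mean-centred fused image
    r = i_r.astype(float).ravel() - i_r.mean()   # mean-centred reference
    return (f @ r) / np.sqrt((f @ f) * (r @ r))
```

Because both images are mean-centred, any affine rescaling of one image leaves the metric at 1, which is the behaviour expected of a correlation-based measure.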

  2. Bias of Mean (BM):

    BM is the difference between the means of the original MS image and the fused image (Stanislas de Bethune, 1998), taken relative to the mean of the original image:

    BM = (mean(MS) - mean(F)) / mean(MS)

    where MS is the original multispectral image and F is the fused image. Zero is the ideal value of the Bias of Mean.

  3. Peak Signal to Noise Ratio (PSNR):

    The mathematical expression for PSNR is given by

    PSNR = 10 log10( L^2 / MSE ),  MSE = (1 / (M N)) SUM_i SUM_j (I_r(i,j) - I_f(i,j))^2

    where I_r is the reference image, I_f is the fused image, L is the peak pixel value, M and N are the image dimensions, and i and j are the row and column indices.
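A minimal PSNR implementation, assuming 8-bit data so the peak value L defaults to 255:

```python
import numpy as np

def psnr(i_r, i_f, peak=255.0):
    """Peak signal-to-noise ratio in dB between reference and fused images."""
    mse = np.mean((i_r.astype(float) - i_f.astype(float)) ** 2)
    # identical images have zero error, hence infinite PSNR
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```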

  4. Structural Similarity Index Measure (SSIM):

    The structural similarity of the fused image and the reference image is determined using SSIM, which is a better perceptual measure than PSNR. A higher value of SSIM indicates better structural quality.

    SSIM = ( (2 mu_r mu_f + C1)(2 sigma_rf + C2) ) / ( (mu_r^2 + mu_f^2 + C1)(sigma_r^2 + sigma_f^2 + C2) )

    where mu_r and mu_f denote the mean values of the reference image I_r and the fused image I_f, sigma_r^2 and sigma_f^2 their variances, sigma_rf the covariance of the two images, and C1, C2 small stabilizing constants.
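A single-window (global) variant of SSIM can be written directly from the formula; production code would normally compute SSIM over local windows and average, as in the original Wang et al. formulation:

```python
import numpy as np

def ssim_global(i_r, i_f, peak=255.0):
    """Global SSIM with the standard constants
    C1 = (0.01 * peak)^2 and C2 = (0.03 * peak)^2."""
    r = i_r.astype(float)
    f = i_f.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mr, mf = r.mean(), f.mean()
    vr, vf = r.var(), f.var()
    cov = ((r - mr) * (f - mf)).mean()
    return ((2 * mr * mf + c1) * (2 * cov + c2)) / \
           ((mr ** 2 + mf ** 2 + c1) * (vr + vf + c2))
```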

  5. Entropy (E):

    Entropy gives the information content of the image; a higher value of entropy indicates a higher amount of information present in the image.

    E = - SUM_{i=0}^{G-1} p_i log2(p_i)

    where G is the total number of grey levels and p_i is the probability of occurrence of grey level i.
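A short implementation of image entropy, assuming G = 256 grey levels by default:

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy (bits) of the grey-level histogram of img."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()      # probability of each grey level
    p = p[p > 0]               # 0 * log2(0) is treated as 0
    return -np.sum(p * np.log2(p))
```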

  6. Universal Image Quality Index (UIQI):

UIQI measures how much of the relevant information is transferred from the reference image into the fused image. The metric ranges from -1 to 1, with 1 indicating that the reference and fused images are identical.
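The global form of UIQI (Wang and Bovik's quality index, the windowless variant) can be computed as:

```python
import numpy as np

def uiqi(x, y):
    """Universal image quality index Q = 4 * cov * mx * my /
    ((vx + vy) * (mx^2 + my^2)), combining correlation,
    luminance and contrast distortion in one score."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

Note the denominator vanishes for constant zero-mean inputs, so real implementations add stabilizing constants or compute the index over sliding windows.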

  5. EXPERIMENTAL RESULTS AND PARAMETER ASSESSMENT:

    The fusion results are shown in the figures below, and the corresponding quality assessment results are tabulated.

    Table 1: Values Of Different Parameters Of Fusion Methods Analyzed To Estimate The Quality Of Fused Images.

    Fig 5 illustrates the data sets to which the fusion methods are applied; the quality measures are assessed and tabulated in Table 1, and the fused outputs are shown in the figures given below:

    Data Set 1 fused outputs (X1)

    Data Set 2 fused outputs (X2)

    Data Set 3 fused outputs (X3)

    Data Set 4 fused outputs (X4)

    Fig 5: Fused Data Sets(X1,X2,X3,X4).

    The results are obtained using remote sensing data (LANDSAT, SPOT and IKONOS), which provides high spatial resolution panchromatic (PAN) images and multiple low spatial resolution multispectral (MS) images [2]. The IHS transform separates the spatial information of the multispectral image as the intensity (I) component. Observing the results, the IHS method enhances the detail information of the image, while the Brovey transform normalizes the three RGB bands and multiplies the result by the desired data to add an intensity or brightness component. In the Gram-Schmidt method the low resolution image is pan-sharpened so that small details in the image become visible. PCA separates the spatial information of the image into the first principal component PC1. The PCA method introduces less color distortion but affects the spectral responses of the multispectral data; this spectral distortion is caused by the mismatch between the spectral responses of the multispectral image and the bandwidth of the PAN image. Compared to the other fusion methods, the wavelet based methods give better results.

  6. CONCLUSION:

    In this paper, we examined different fusion techniques applied to remote sensing satellite data. Many research papers have reported the limitations of existing fusion techniques; the most significant problem observed is color distortion. To reduce color distortion and improve fusion quality, various fusion techniques have been developed, compared and analyzed. Quality measures such as PSNR, SSIM, the correlation coefficient (CC), entropy (E) and the universal image quality index (UIQI) are applied to the fused outputs and analyzed. The quality of fusion is assessed for different images at different levels. Compared to the existing fusion methods, the proposed fusion techniques show better results.

  7. REFERENCES:

  1. Subramanian. P, Alamelu N.R, Aramudhan. M, "Fusion Of Multispectral And Panchromatic Images And Its Quality Assessment", ARPN Journal of Engineering and Applied Sciences, Vol. 10, No. 9, May 2015.

  2. Dheepa.G, Sukumaran.S, "Satellite Image Fusion Technique Using Interaction Of IHS Transform And Contrast Based Wavelet Packets", International Journal of computer applications(0975- 8887), Volume 107-No.9,December 2014.

  3. Aanaes, H.; Sveinsson, J.R.; Nielsen, A.A.; Bovith, T.; Benediktsson, J.A., "Model-Based Satellite Image Fusion," Geoscience and Remote Sensing, IEEE Transactions on, vol.46, no.5, pp.1336, 1346, May 2008. doi: 10.1109/TGRS.2008.916475.

  4. Yakhdani. M. F and Azizi. A, "Quality assessment of image fusion techniques for multisensor high resolution satellite images (case study: IRS-P5 and IRS-P6 satellite images)", In: Wagner W., Székely, B. (eds.): ISPRS TC VII Symposium, 100 Years ISPRS, Vienna, Austria, July 5-7, 2010, IAPRS, Vol. XXXVIII, Part 7B.

  5. Gonzalez-Audicana, M.; Saleta, J.L.; Catalan, R.G.; Garcia, R., "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," Geoscience and Remote Sensing, IEEE Transactions on , vol.42, no.6, pp.1291,1299, June 2004 doi: 10.1109/TGRS.2004.825593.

  6. P. S. Chavez and A. Y. Kwarteng, "Extracting spectral contrast in Landsat Thematic Mapper image data using selective principal component analysis", Photogramm. Eng. Remote Sens., vol. 55, pp. 339-348, March 1989.

  7. M. González-Audícana and A. Seco, "Fusion of multispectral and panchromatic images using wavelet transform: evaluation of crop classification accuracy", in Proc. 22nd EARSeL Annu. Symp. Geoinformation Eur.-Wide Integr., Prague, Czech Republic, 4-6 June 2002, T. Benes, Ed., 2003, pp. 265-272.

  8. Krista Amolins, Yun Zhang, and Peter Dare, "Wavelet based image fusion techniques: An introduction, review and comparison", ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 62, pp. 249-263, 2007.

  9. Nunez, J.; Otazu, X.; Fors, O.; Prades, A.; Pala, V.; Arbiol, R., "Multiresolution-based image fusion with additive wavelet decomposition," Geoscience and Remote Sensing, IEEE Transactions on, vol.37, no.3, pp.1204, 1211, May 1999 doi: 10.1109/36.763274.

  10. Shruti, Sumit Budhiraja, "Performance Analysis of IHS and Wavelet based Integrated Pan Sharpening Techniques", International Journal of Computer Applications (0975-8887), International Conference on Advances in Emerging Technology (ICAET 2016).

  11. Nisthula P, Yadhu. R. B, "A Novel Method Of Image Fusion Combining PCA, IHS And Integrated RIM For Road Extraction In Satellite Imagery", International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, Vol. 2, Issue 6, June 2013.

  12. Priya D. Vora, Neeta Chudasama, "Different Image Fusion Techniques and Parameters: A Review", International Journal of Computer Science and Information Technologies, Vol. 6 (1), 2015, pp. 889-892.

  13. Vaibhav R. Pandit, R. J. Bhiwani, "Image Fusion in Remote Sensing Applications: A Review", International Journal of Computer Applications (0975-8887), Volume 120, No. 10, June 2015.

  14. Mohamed R. Metwalli, Ayman H. Nasr, Osama S. Farag Allah, and S. El-Rabaie, "Image Fusion Based on Principal Component Analysis and High-Pass Filter", Conference Paper, DOI: 10.1109/ICCES.2009.5383308, IEEE Xplore, January 2010.

  15. Changtao He, Quanxi Liu, Hongliang Li, Haixu Wang, "Multimodal medical image fusion based on IHS and PCA", 2010 Symposium on Security Detection and Information Processing, Elsevier, 2010.

  16. Krista Amolins, Yun Zhang, and Peter Dare, "Wavelet based image fusion techniques: An introduction, review and comparison", ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 62, pp. 249-263, 2007.

  17. Nunez, J.; Otazu, X.; Fors, O.; Prades, A.; Pala, V.; Arbiol, R., "Multiresolution-based image fusion with additive wavelet decomposition," Geoscience and Remote Sensing, IEEE Transactions on, vol.37, no.3, pp.1204, 1211, May 1999 doi: 10.1109/36.763274.

  18. Do, M.N.; Vetterli, M., "The contourlet transform: an efficient directional multiresolution image representation," Image Processing, IEEE Transactions on, vol.14, no.12, pp.2091, 2106, Dec. 2005 doi: 10.1109/TIP.2005.859376.

  19. da Cunha, A.L.; Jianping Zhou; Do, M.N., "The Nonsubsampled Contourlet Transform: Theory, Design, and Applications," Image Processing, IEEE Transactions on, vol.15, no.10, pp.3089,3101, Oct. 2006 doi: 10.1109/TIP.2006.877507.

  20. M. Strait, S. Rahmani and D. Merkurev, "Evaluation of pan-sharpening methods", 2008.

  21. Myungjin Choi et al., "Fusion of multispectral and panchromatic satellite images using the curvelet transform", IEEE Geoscience and Remote Sensing Letters, Vol. 2, No. 2, April 2005.

  22. Amro et al., "A survey of classical methods and new trends in pansharpening of multispectral images", EURASIP Journal on Advances in Signal Processing, 2011, 2011:79.

  23. Shah, V.P.; Younan, N.H.; King, R.L., "An Efficient Pan- Sharpening Method via a Combined Adaptive PCA Approach and Contourlets," Geoscience and Remote Sensing, IEEE Transactions on , vol.46, no.5, pp.1323,1335, May 2008 doi: 10.1109/TGRS.2008.916211.

  24. Shih-Gu Huang, "Wavelet for Image Fusion", Graduate Institute of Communication Engineering and Department of Electrical Engineering, National Taiwan University.

  25. Dong Jiang, Dafang Zhuang, Yaohuan Huang and Jinying Fu (2011), "Survey of Multispectral Image Fusion Techniques in Remote Sensing Applications", in Image Fusion and Its Applications, Dr. Yufeng Zheng (Ed.), ISBN: 978-953-307-182-4.
