Fusion of Multispectral and Panchromatic Images using Wavelet Transform

DOI : 10.17577/IJERTCONV6IS15105



Chaithra C.C
M.Tech Student, Dept. of CS&E,
AIT, Chikkamagaluru, Karnataka, India

Taranath N.L
Asst. Prof, Dept. of CS&E,
AIT, Chikkamagaluru, Karnataka, India

Abstract: Image fusion is the process of merging the relevant information of two or more images into a single image; the resulting image contains more information than any of the originals. A multispectral (MS) image is a satellite product with rich spectral information but low spatial resolution, which by itself is not sufficient for many remote sensing applications. A panchromatic (PAN) image is another type of satellite product, with high spatial resolution but little spectral information. Since remote sensing applications require both rich spatial and rich spectral information, fusing MS and PAN images yields an image that provides both. Many fusion algorithms exist for this task, including principal component analysis (PCA), the discrete wavelet transform (DWT), pixel-level image fusion and multi-sensor image fusion. In this paper we discuss the discrete wavelet transform. Qualitative analysis determines the performance of the fused image by comparing it with the original images; the quality metrics evaluated are the relative global dimensional synthesis error (ERGAS), cross correlation (CC) and the spectral angle mapper (SAM). This paper reviews the DWT technique and compares DWT with PCA using these quality metrics.

      Keywords: Color space conversion, Interpolation, Wavelet transform and quality metrics.


I. INTRODUCTION

A remote sensor gathers information about the location and phenomena of an object without physical contact with it. The human eye is a good example of a remote sensing device: with the help of sunlight or a light bulb, humans can gather information about the objects around them, although only objects that fall within the visible spectrum can be sensed. Devices that must contact an object to measure some value, such as a thermometer, are not considered remote sensors. More specifically, remote sensing is used to gather information about the interactions between earth surface materials and electromagnetic energy.

Energy sources for sensing can be divided into two main groups: passive and active. Passive sources capture the sun's radiation reflected from the imaged location; no radiation is generated by the source itself. Active sources generate a signal of a particular nature, which is reflected from the imaged location and captured back by the source [1]; the received signal provides information about the reflectance properties of the imaged location. In the early days of photography the sun was the major energy source, but today photographs can be taken without the help of sunlight.

When electromagnetic energy strikes a material, three types of interaction can take place: reflection, absorption and transmission. The main concern for a remote sensor is the reflected portion, because the sensor senses objects through the energy they reflect. How much is reflected varies with the nature of the material and of the electromagnetic energy.

Remote sensors provide products such as panchromatic, multispectral, hyperspectral and ultraspectral images; the different bands arise from the different parts of the electromagnetic spectrum that fall on the earth's surface. Among these, the panchromatic image is the richest in spatial resolution, so to increase the spatial resolution of an MS image it must be fused with a PAN image [2]. Many algorithms have been proposed for fusing MS and PAN images; this paper discusses the discrete wavelet transform (DWT).

This paper is organized as follows: Section II presents a review of existing techniques for the fusion of MS and PAN images. Section III describes the methodology. Section IV gives details of the results and performance metrics. Section V concludes the paper.


II. LITERATURE SURVEY

A brief survey was carried out in preparation for the proposed method and is discussed in this section. A Genetic Algorithm (GA) fusion method based on the IHS (Intensity Hue Saturation) transform is presented in [3]. The GA method assumes that the MS image has 4 bands and that the scale ratio between PAN and MS is 1:4. In the first step, the GA computes the model parameters at a scale degraded by a factor of 4, using the original MS image and a reference image to maximize Q4. The resulting weights for the generalized intensity computation and the injection model parameters g_i are then used in GA fusion to produce an MS image at the same spatial resolution as the PAN image. A disadvantage of the GA method is that it introduces small ringing artifacts, caused by the filtering operation used to extract spatial details from the PAN image. A curvelet transform for fusing multispectral and panchromatic images is proposed in [4]; its basic elements show high directional sensitivity and are highly anisotropic. The curvelet transform represents edges better than wavelets, and edges are used to extract detailed spatial information from an image. An induction scaling technique for fusing PAN and MS images is presented in [5]; it uses smoothing-filter-based intensity modulation (SFIM). SFIM first upsamples the MS image so that it has the same size as the PAN image, and then either adds the high-frequency content of the PAN image to the MS image or substitutes the spectral intensity into the PAN image.


III. METHODOLOGY

The overall flow of the proposed work is partitioned into five modules, as shown in Figure 1: image acquisition, color space conversion, DWT, inverse DWT and performance metrics. The MS and PAN images are given as input; the MS image is in RGB form and is first converted to the HSV color space. Next, the DWT is applied to the PAN image, decomposing it until its resolution matches that of the MS image. The MS image then replaces the low-low (LL) pass band of the PAN decomposition. Finally, the inverse DWT is applied to recombine the decomposition components.

When an image has been DWT transformed, it is decomposed into four different frequency bands: one low pass band (LL) and three high pass bands corresponding to the horizontal (HL), vertical (LH) and diagonal (HH) details, as discussed in [6]. The MS image is injected into the LL pass band.
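The four-band decomposition described above can be sketched in pure Python with a one-level 2-D Haar transform (a minimal illustration only; the paper does not specify the wavelet, and production code would use a wavelet library):

```python
# One-level 2-D Haar DWT: decompose an image into LL, LH, HL, HH sub-bands.
# The Haar low-pass is a pairwise average, the high-pass a pairwise difference.

def haar_dwt2(img):
    """Decompose a 2-D list of numbers (even dimensions) into (LL, LH, HL, HH)."""
    def rows_step(m):
        # Filter every row into low-pass (averages) and high-pass (differences).
        lo, hi = [], []
        for row in m:
            lo.append([(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)])
            hi.append([(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)])
        return lo, hi

    def transpose(m):
        return [list(c) for c in zip(*m)]

    lo, hi = rows_step(img)               # filter along rows
    ll, lh = rows_step(transpose(lo))     # then along columns of the low band
    hl, hh = rows_step(transpose(hi))     # and along columns of the high band
    return transpose(ll), transpose(lh), transpose(hl), transpose(hh)

# Tiny synthetic "image" made of flat 2x2 blocks.
img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
LL, LH, HL, HH = haar_dwt2(img)
print(LL)  # half-resolution approximation band
```

Because the example image is piecewise constant, all detail bands come out zero and the LL band is simply the block averages, which is exactly the "low pass" behaviour the fusion scheme relies on.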

4. Inverse discrete wavelet transform

After the DWT step, the inverse DWT is applied to combine the sub-components. The resulting fused image is in the HSV color space and is converted back to RGB; bi-cubic interpolation is then applied to resample the fused image.
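The substitute-then-invert idea can be illustrated in one dimension (a sketch with hypothetical numbers; the 2-D case applies the same steps along rows and columns): the approximation coefficients of the PAN signal are replaced by MS values, and the inverse Haar transform recombines them with the PAN detail coefficients.

```python
# Forward and inverse 1-D Haar steps, then LL-substitution fusion.

def haar_fwd(x):
    # Pairwise averages (approximation) and differences (detail).
    lo = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return lo, hi

def haar_inv(lo, hi):
    # Exact inverse of the averaging/differencing step.
    out = []
    for a, d in zip(lo, hi):
        out.extend([a + d, a - d])
    return out

pan = [10, 12, 30, 28]
lo, hi = haar_fwd(pan)      # lo plays the role of the LL band
ms_lo = [5.0, 20.0]         # hypothetical upsampled MS values replace lo
fused = haar_inv(ms_lo, hi) # MS intensities + PAN high-frequency detail
print(fused)
```

The fused signal keeps the local variation (detail) of the PAN input while its coarse levels come from the MS input, which is the essence of the method.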

5. Performance metrics

After fusion, the quality of the resulting image is checked against the original images. In this work the metrics used to check the quality of the fused image are ERGAS, CC, RMSE and SAM, as discussed in [7] and [8].

Figure 1: Architecture diagram for the proposed system.

• Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS)

ERGAS measures the quality of the fused image in terms of the normalized average error of each band; a low ERGAS value indicates that the fused image is close to the original. It is defined in Eq. (5):

ERGAS = 100 \frac{d_h}{d_l} \sqrt{\frac{1}{N} \sum_{k=1}^{N} \left( \frac{\mathrm{RMSE}(k)}{\mu(k)} \right)^2}  (5)

where d_h / d_l is the ratio of the PAN and MS pixel sizes, N is the number of bands and \mu(k) is the mean of the k-th band.
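A minimal sketch of ERGAS as defined in Eq. (5) follows; the band values and the 1:4 resolution ratio are illustrative assumptions, not data from the paper:

```python
# ERGAS: 100 * (dh/dl) * sqrt( (1/N) * sum_k (RMSE(k)/mu(k))^2 )
import math

def ergas(reference, fused, dh_over_dl):
    """reference/fused: lists of bands, each band a flat list of pixel values."""
    n = len(reference)
    acc = 0.0
    for ref_band, fus_band in zip(reference, fused):
        mu = sum(ref_band) / len(ref_band)  # mean of the reference band
        rmse = math.sqrt(sum((r - f) ** 2 for r, f in zip(ref_band, fus_band))
                         / len(ref_band))
        acc += (rmse / mu) ** 2
    return 100.0 * dh_over_dl * math.sqrt(acc / n)

ref = [[100.0, 102.0], [50.0, 52.0]]   # two hypothetical bands
fus = [[100.0, 102.0], [50.0, 52.0]]
print(ergas(ref, fus, 0.25))  # identical images give ERGAS = 0
```

As the text notes, lower values are better: a perfect fusion gives 0, and any per-band error raises the score in proportion to that band's relative RMSE.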

        1. Image Acquisition


Images are acquired with the help of sensors. Images of the earth captured by most optical earth observation satellites, such as Landsat, Quickbird, IKONOS and GeoEye-1, are available in the Google satellite database [6].

2. Color space conversion

The PAN image is in gray scale while the MS image is in RGB form. To fuse PAN and MS, the MS image is converted from RGB to the HSV color space; in MATLAB the built-in function hsv = rgb2hsv(rgb) converts RGB values to the corresponding hue, saturation and value (HSV), as described in Eqs. (1)-(4):

\theta = \cos^{-1} \left[ \frac{\frac{1}{2}\left[(R-G)+(R-B)\right]}{\sqrt{(R-G)^2 + (R-B)(G-B)}} \right]  (1)

H (Hue) = \begin{cases} \theta & B \le G \\ 360^{\circ} - \theta & B > G \end{cases}  (2)

S (Saturation) = 1 - \frac{3}{R+G+B} \min(R, G, B)  (3)

V (Value) = \frac{1}{3}(R + G + B)  (4)

3. Discrete wavelet transform

The wavelet transform divides an image into low- and high-frequency components. The main idea of the DWT in image processing is to decompose the image into sub-images occupying different spatial regions and independent frequency districts, and then to transform the coefficients of each sub-image.

• Cross correlation (CC)

CC indicates the spectral feature similarity between the original and fused images and is calculated using Eq. (6). A CC value below 1 implies increased variation, while a value of +1 indicates that the original and fused images are similar:

CC = \frac{2 C_{rf}}{C_r + C_f}  (6)

where C_{rf} is the covariance between the reference and fused images and C_r, C_f are their variances.

• Relative Average Spectral Error (RASE)

RASE is a commonly used metric for checking the spectral quality of the fused image. It uses the per-band spectral RMSE and the average brightness M, and is calculated using Eq. (7):

RASE = \frac{100}{M} \sqrt{\frac{1}{N} \sum_{i=1}^{N} \mathrm{RMSE}(B_i)^2}  (7)

where M is the mean radiance of the N spectral bands (B_i) of the original MS image.
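The conversion in Eqs. (1)-(4) can be sketched directly in Python. Note that these equations are the classic HSI-style formulation; MATLAB's rgb2hsv uses the hexcone model, so its numbers differ slightly. R, G, B are assumed normalized to [0, 1]:

```python
# Hue/saturation/value per Eqs. (1)-(4) of the text.
import math

def rgb_to_hsv_eqs(R, G, B):
    num = 0.5 * ((R - G) + (R - B))
    den = math.sqrt((R - G) ** 2 + (R - B) * (G - B)) or 1e-12  # avoid /0 on gray
    ratio = max(-1.0, min(1.0, num / den))         # clamp for acos domain
    theta = math.degrees(math.acos(ratio))         # Eq. (1)
    H = theta if B <= G else 360.0 - theta         # Eq. (2)
    S = 1.0 - 3.0 * min(R, G, B) / (R + G + B)     # Eq. (3)
    V = (R + G + B) / 3.0                          # Eq. (4)
    return H, S, V

print(rgb_to_hsv_eqs(1.0, 0.0, 0.0))  # pure red: hue 0, full saturation
```

For pure red the angle θ is 0°, saturation is 1 (no gray component) and the value is the mean intensity 1/3, matching the intuition behind each equation.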

          • Spectral Angle Mapper (SAM)

SAM indicates the spectral similarity between the original and fused images by calculating the angle between their spectra: the images are treated as vectors, and the angle between them is defined in Eq. (8). A SAM value of 0 indicates that there is no spectral distortion in the fused image:

SAM = \cos^{-1} \left( \frac{\langle I, J \rangle}{\lVert I \rVert \, \lVert J \rVert} \right)  (8)

where \langle I, J \rangle is the scalar product of the spectral vectors I and J.
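Eq. (8) can be sketched as follows, computed here for a single hypothetical pixel's spectrum across N bands:

```python
# SAM: angle between two spectral vectors, acos(<I,J> / (||I|| * ||J||)).
import math

def sam(i_vec, j_vec):
    dot = sum(a * b for a, b in zip(i_vec, j_vec))   # scalar product <I, J>
    ni = math.sqrt(sum(a * a for a in i_vec))        # ||I||
    nj = math.sqrt(sum(b * b for b in j_vec))        # ||J||
    ratio = max(-1.0, min(1.0, dot / (ni * nj)))     # clamp rounding error
    return math.acos(ratio)                          # angle in radians

print(sam([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # parallel spectra: angle ~ 0
```

Because only the angle matters, SAM is insensitive to uniform intensity scaling: a fused spectrum that is a scaled copy of the original scores (approximately) 0, i.e. no spectral distortion.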


IV. RESULTS AND PERFORMANCE ANALYSIS

The proposed method takes a total of 20 image sets from the Google satellite database for analysis. Among them, some images are disturbed during acquisition or by cloud cover. Each image undergoes color space conversion, DWT and inverse DWT. Figure 2 shows the results of image fusion using the DWT technique. PAN and MS images of size 1024*1024 and 126*126 are provided as input; the resulting fused image is a 1024*1024 multispectral image.

(a) MS image (b) PAN image
(c) RGB to HSV (d) inverse DWT
(e) Interpolation (f) fused image

        Figure 2: Output results for image fusion using DWT technique.

Table 1: Comparison of PCA and DWT using performance metrics

Metric | DWT based fusion | PCA based fusion
ERGAS  | 25.9             | 32.3
RMSE   | 0.08             | 0.14

Figure 3: Comparison graph of PCA and DWT technique.

Figure 3 shows the comparison graph for the PCA and DWT techniques; different results are obtained for each. As shown in Table 1, DWT gives better results than PCA: during fusion PCA does not retain the full spectral information of the original image, whereas DWT retains it better [9]. This can be analysed using the performance metrics.


V. CONCLUSION

Fusing the high spectral resolution of the MS image with the high spatial resolution of the PAN image produces a fused image that combines high spatial and spectral resolution. Image fusion using the DWT technique gives better results than the PCA technique, as verified with the performance metrics [10]. The ERGAS metric gives 25.9 for DWT and 32.3 for PCA fusion; the lower value indicates that the fused image is closer to the original. The RMSE value is 0.08 for DWT and 0.14 for PCA; a low RMSE value indicates that the fused image has high spectral quality.

Future enhancement

The future scope is to apply a weighted fusion technique to remote sensor images in order to detect objects.


REFERENCES

[1] J. Nichol and M.S. Wong, "Satellite remote sensing for detailed landslide inventories using change detection and image fusion", International Journal of Remote Sensing, Vol. 26, No. 9, pp. 1913-1926, 2005.

[2] C. Pohl and J.L. van Genderen, "Multi-sensor Image Fusion in Remote Sensing: Concepts, Methods and Applications", International Journal of Remote Sensing, Vol. 19, No. 5, pp. 823-854, 1998.

[3] Tian Hui and Wang Binbin, "Discussion and Analyze on Image Fusion Technology", IEEE Second International Conference on Machine Vision (ICMV 09), 2009, pp. 246-250.

[4] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull and N. Canagarajah, "Pixel- and region-based image fusion with complex wavelets", Information Fusion, Elsevier, Vol. 8, pp. 119-130, 2007.

[5] Y. Li and V. Ragini, "Multichannel Image Registration by Feature-Based Information Fusion", IEEE Transactions on Medical Imaging, Vol. 30, No. 3, pp. 707-720, 2011.

[6] Rong Wang, Fanliang Bu, Hua Jin and Lihua Li, "A Feature Level Image Fusion Algorithm Based on Neural Network", IEEE First International Conference on Bioinformatics and Biomedical Engineering (ICBBE 2007), pp. 265-280.

[7] K. RajaKumari and S. Anu Priya, "Survey on contourlet based fusion techniques for multimodality image fusion", International Journal of Computer Engineering and Applications, Vol. IX, Issue III, March 2015, www.ijcea.com, ISSN 2321-3469.

[8] Gemine Vivone, Luciano Alparone, Jocelyn Chanussot and Mauro Dalla Mura, "A Critical Comparison Among Pansharpening Algorithms", IEEE Transactions on Geoscience and Remote Sensing, Vol. 53, No. 5, pp. 2565-2586, May 2015.

[9] Andrea Garzelli, "Pansharpening of Multispectral Images Based on Nonlocal Parameter Optimization", IEEE Transactions on Geoscience and Remote Sensing, Vol. 53, No. 4, pp. 2096-2107, April 2015.

[10] Changtao He, Quanxi Liu, Hongliang Li and Haixu Wang, "Multimodal medical image fusion based on IHS and PCA", Symposium on Security Detection and Information Processing, Procedia Engineering 7, Elsevier Ltd., 2010, pp. 280-285.
