Analysis and Implementation of Cholesky Decomposition Technique Based Pixel Level Colour Image Fusion

DOI: 10.17577/IJERTCONV2IS13121


Shruti Joshi

Dayananda Sagar College of Engineering, Dept. of Instrumentation Tech,

VTU, Belgaum, Karnataka, India

shrutijoshi06@gmail.com

Abstract: Image fusion is the process of merging or combining relevant information from two or more images into a single image. The resultant image retains the important features of each of the original images and improves the signal-to-noise ratio. Image fusion has attracted a lot of interest in recent years, and different fusion methods have been proposed, mainly in the fields of remote sensing and computer vision (e.g., night vision). In this paper, we study the Cholesky decomposition technique, a linear pixel-level fusion method suitable for remotely sensed data. The technique comprises three modules: covariance estimation, Cholesky decomposition, and transformation. The algorithm is implemented in MATLAB. The colour properties of the final fused image can be selected by the user by controlling the resulting correlation between the colour components. The technique is used in many real-time applications, including space imaging in airborne vehicles, medical applications, standard night vision, defence, and surveillance.

  1. INTRODUCTION

The term fusion refers, in general, to the extraction of information acquired in several domains. The goal of image fusion (IF) is to integrate complementary multisensor, multitemporal, and/or multiview information into one new image containing information whose quality cannot be achieved otherwise. The term quality, its meaning, and its measurement depend on the particular application.

    Fusion types:

Image fusion has been used in many application areas. In remote sensing and in astronomy, multisensor fusion is used to achieve high spatial and spectral resolutions by combining images from two sensors, one of which has high spatial resolution and the other high spectral resolution. A large number of different image fusion methods have been proposed, mainly due to the different available data types and the variety of applications. Numerous fusion applications have appeared in medical imaging, such as the simultaneous evaluation of CT, MRI, and/or PET images. Many applications that use multisensor fusion of visible and infrared images have appeared in the military, security, and surveillance areas. In the case of multiview fusion, a set of images of the same scene taken by the same sensor but from different viewpoints is fused to obtain an image with higher resolution than the sensor normally provides, or to recover the 3D representation of the scene. The multitemporal approach recognizes two different aims. Images of the same scene are acquired at different times either to find and evaluate changes in the scene or to obtain a less degraded image of the scene. The former aim is common in medical imaging, especially in change detection of organs and tumors, and in remote sensing for monitoring land or forest exploitation; the acquisition period is usually months or years. The latter aim requires the different measurements to be much closer to each other, typically on a scale of seconds, and possibly under different conditions.

Fig 1: Application of image fusion in medical imaging

    Categories of fusion:

Image fusion methods are mainly categorized into the pixel (low), feature (mid), or symbol (high) level.

    1. Pixel level image fusion represents fusion at the lowest level, where a number of raw input image signals are combined to produce a single fused image signal.

2. Feature level image fusion fuses features, object labels, and property descriptor information that have already been extracted from the individual input images.

    3. Symbol level image fusion represents fusion of probabilistic decision information obtained by local decision makers operating on the results of feature level processing on image data produced from individual sensors.

Pixel-level techniques that work in the spatial domain have gained significant interest, mainly due to their simplicity and linearity. Multiresolution analysis is another popular approach for pixel-level image fusion, using filters of increasing spatial scale to produce a pyramid sequence of images at different resolutions. In most of these approaches, at each position of the transformed image, the value in the pyramid that corresponds to the highest saliency is used. Finally, an inverse transformation of the composite image is employed to derive the fused image.
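As an illustration of the simplest spatial-domain case, the following minimal MATLAB sketch fuses two co-registered grayscale images by weighted averaging. The file names and the equal weights are assumed for illustration only and are not part of the method studied in this paper.

% Pixel-level fusion by weighted averaging of two co-registered
% grayscale images (illustrative file names and weights).
I1 = im2double(imread('visible.png'));    % assumed source image
I2 = im2double(imread('infrared.png'));   % assumed source image
F  = 0.5*I1 + 0.5*I2;                     % linear per-pixel combination
imshow(F);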

Real-time implementation of image fusion systems is very demanding, since the algorithms involved have relatively high runtime complexity.

    Image fusion methods

High-pass filtering, IHS transform based image fusion, PCA based image fusion, wavelet transform image fusion, pair-wise spatial frequency matching, and the Cholesky decomposition technique.

In this paper, we present a detailed study of the Cholesky decomposition technique. This technique not only provides an effective solution to the image fusion problem but also allows fast implementations, and it has therefore been used successfully in a plethora of computer vision applications.

  2. IMAGE FUSION METHOD BACKGROUND

In this section, the necessary background for a vector representation of multidimensional remotely sensed data is provided. The statistical properties of a multispectral data set with M × N pixels per channel and K different channels can be explored if each pixel is described by a vector whose components are the individual spectral responses to each multispectral channel

x = [x1, x2, . . . , xK]^T (1)

with a mean vector given by

mx = E{x}

While the mean vector is used to define the average or expected position of the pixels in the vector space, the covariance matrix describes their scatter

Cx = E{(x - mx)(x - mx)^T} (2)
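As a sketch, the mean vector and the covariance matrix of (1) and (2) can be estimated in MATLAB as follows, assuming the K multispectral channels are stacked in an M × N × K array img (the variable names are illustrative):

% Arrange the image cube as an (M*N) x K matrix with one
% pixel vector x per row, then estimate mx and Cx.
[M, N, K] = size(img);
X  = reshape(img, M*N, K);
mx = mean(X, 1)';      % mean vector mx (K x 1)
Cx = cov(X);           % covariance matrix Cx (K x K)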

The covariance matrix can be used to quantify the correlation between the multispectral bands. In the case of a high degree of correlation, the corresponding off-diagonal elements in the covariance matrix will be large. The correlation between the different multispectral components can also be described by means of the correlation coefficient. The correlation coefficient rij is related to the corresponding covariance matrix element, since it is that element divided by the standard deviations of the corresponding multispectral components (rij = cij/(σi σj)). The correlation coefficient matrix Rx has as its elements the correlation coefficients between the ith and jth multispectral components. Accordingly, all the diagonal elements are one, and the matrix is symmetric.
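For example, Rx can be computed from Cx as sketched below (Cx as in the previous snippet; corrcov, from the Statistics Toolbox, gives the same result):

% Correlation coefficients rij = cij / (sigma_i * sigma_j).
s  = sqrt(diag(Cx));     % per-channel standard deviations
Rx = Cx ./ (s * s');     % unit diagonal, symmetric
% Equivalently, with the Statistics Toolbox: Rx = corrcov(Cx);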

Several different linear transforms can be found, based on the statistical properties of the vector representation. An important case is the Karhunen-Loeve transform, also known as principal component analysis (PCA). For this transformation, the matrix Cx is real and symmetric, so a set of orthonormal eigenvectors can always be found. If the three principal components are used to establish a red-green-blue (RGB) image (the first component as red, the second as green, and the third as blue), the result is not optimal for the human visual system. The first principal component (red) will exhibit a high degree of contrast, the second (green) will display only a limited range of brightness values, and the third (blue) will demonstrate an even smaller range. In addition, the three components displayed as R, G, and B are totally uncorrelated, an assumption that does not hold for natural images. Therefore, a color image having as RGB channels the first three principal components of the PCA transformation of the source multispectral channels possesses, most of the time, unnatural correlation properties compared with natural color images.
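For comparison with the proposed method, the PCA composite discussed above can be sketched in MATLAB as follows (X, mx, Cx, M, N as in the earlier snippets; the min/max scaling is an assumed display normalization):

% Eigen-decomposition of Cx, sorted by descending eigenvalue.
[V, D]   = eig(Cx);
[~, ord] = sort(diag(D), 'descend');
V = V(:, ord);
% Project the zero-mean pixel vectors onto the principal components
% and display the first three components as R, G, and B.
Y   = (X - mx') * V;
rgb = reshape(Y(:, 1:3), M, N, 3);
rgb = (rgb - min(rgb(:))) / (max(rgb(:)) - min(rgb(:)));
imshow(rgb);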

  3. PROPOSED METHOD

Fig 2: Information flow diagram of the image fusion scheme employing the Cholesky decomposition technique

The information flow diagram of the Cholesky decomposition based image fusion algorithm is shown in Fig. 2. A different approach to forming an RGB image from multispectral data is not to totally decorrelate the data, but to control the correlation between the final color components of the fused image. This control is carried out using the covariance matrix. The proposed transformation distributes the energy of the source multispectral channels so that the correlation between the RGB components of the fused image is similar to that of natural color images. In this way, no additional transformation is needed, and the result can be presented directly on any RGB display. This is done using a linear transformation of the form

y = A^T x (5)

where x and y are the population vectors of the source and final images, respectively. The relation between the covariance matrices is then

Cy = A^T Cx A (6)

where Cx is the covariance of the vector population x and Cy is the covariance of the vector population arising after the linear transformation. The required values of the elements in the resulting covariance matrix Cy are based on the study of natural color images. Selecting a covariance matrix based on the statistical properties of natural color images guarantees that the resulting fused color image will be perceptually meaningful.

The proposed transformation is based on the selection of the two matrices Cx and Cy, where Cx is the covariance of the original multispectral image set and Cy is the covariance matrix for the resulting fused image. It must be mentioned that Cx and Cy are of the same dimension. The desired 3 × 3 covariance matrix derived from the study of natural color images is used in the upper-left part of Cy; the other elements are zero, except those of the main diagonal, which are equal to 1. If the matrices Cx and Cy are known, the transformation matrix A can be evaluated using the Cholesky factorization method. Accordingly, a positive definite matrix S can be decomposed using an upper triangular matrix Q, that is
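The structure of Cy described above can be assembled as in the sketch below, where Cnat stands for the 3 × 3 covariance block measured from natural color images (its numeric values come from such a study and are not reproduced here; Cnat and K are placeholders from the earlier snippets):

% Target covariance Cy: natural-image 3x3 block in the upper-left,
% unit main diagonal elsewhere, zero off-diagonal elements.
Cy = eye(K);            % K x K identity
Cy(1:3, 1:3) = Cnat;    % desired covariance of the RGB output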

S = Q^T Q (7)
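In MATLAB, the built-in chol function returns exactly this factor: for a positive definite S, it yields an upper triangular Q with Q^T Q = S, as the short check below illustrates.

Q   = chol(S);            % upper triangular Cholesky factor
err = norm(Q'*Q - S);     % ~0 up to rounding error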

Thus the matrices Cx and Cy can, using the above factorization, be written as

Cx = Qx^T Qx,  Cy = Qy^T Qy (8)

and (6) becomes

Qy^T Qy = A^T Qx^T Qx A = (Qx A)^T (Qx A) (9)

thus

Qy = Qx A (10)

and the transformation matrix is

A = Qx^(-1) Qy (11)

The final form of the transformation matrix A implies that the proposed transformation depends on the statistical properties of the original multispectral data set. In addition, the statistical properties of natural color images are taken into account in the design of the transformation. The resulting population vector y is of the same order as the original population vector x, but only the first three components of y are used for color representation.
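Putting the pieces together, the complete transformation can be sketched in MATLAB as follows (X, mx, Cx, Cy, M, N as in the earlier snippets; subtracting the mean before the projection and the final display scaling are implementation choices assumed here):

% Cholesky factors of the source and target covariances, eq. (8).
Qx = chol(Cx);
Qy = chol(Cy);
% Transformation matrix A = inv(Qx)*Qy, eq. (11); the backslash
% form avoids forming the explicit inverse.
A = Qx \ Qy;
% Apply y = A^T x to every pixel vector, eq. (5), and keep the
% first three components of y as the RGB fused image.
Y     = (X - mx') * A;
fused = reshape(Y(:, 1:3), M, N, 3);
fused = (fused - min(fused(:))) / (max(fused(:)) - min(fused(:)));
imshow(fused);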

4. RESULT

The Cholesky decomposition algorithm is verified in MATLAB. Images from different sensors are taken and the fusion algorithm is applied. The resulting image is more informative than the input images. Two examples are shown below.

Fig 3: IKONOS data set. (a) Natural color composite of the first three bands, (b) the corresponding NIR channel, and (c) the fusion result.

Fig 4: Night vision data set. (a) Natural color scene, (b) the corresponding infrared image, and (c) the fusion result.

5. CONCLUSION

The colour fusion method is configurable, since it allows the user to control the correlation properties of the final fused color image. The hardware realization, which is based on FPGA technology, provides a fast, compact, and low-power solution for image fusion. The developed algorithm can be used in various applications. Future work in this field is planned to extend the method to other types of image modalities and to objectively evaluate image fusion methods in real time.

6. ACKNOWLEDGMENT

I would like to thank Prof. J. Rajashekar, Head of the Department, Instrumentation Technology, DSCE, Bangalore, for his cheerful encouragement. I wish to express my heartfelt thanks to my project guide, Prof. Ebenezer V., DSCE, Bangalore, for his valuable suggestions.

