Multi Sensor medical image fusion techniques and performance analysis

DOI : 10.17577/IJERTV2IS2300


J. Narendra Babu, ME(PhD), Associate Professor, SIETK, Puttur, Dr. S. A. K. Jilani, PhD, Professor, MITS, Madanapalli, A.P

Abstract

The fusion of images is the process of combining two or more images into a single image that retains the important features of each. Fusion is an important technique in many different fields, such as remote sensing, robotics and medical applications. The Laplacian pyramid was first introduced as a model for binocular fusion in human stereo vision; it is well suited to many image analysis tasks as well as to image compression. Discrete wavelet based fusion techniques have been reasonably effective in combining perceptually important image features. We analyze these two techniques by applying them to medical images (CT and MRI) and calculating the statistical parameters mean, standard deviation and entropy as quantitative measures. The experimental results show that the Laplacian pyramid performs better than the wavelet transform (DWT) in one case, while the DWT performs better than the Laplacian pyramid in the other.

I. INTRODUCTION

Image fusion has wide applications in medical imaging, remote sensing, night-time operation and multispectral imaging. Together with image registration, image fusion has received growing attention, especially in the medical image processing domain, in tasks such as computer-aided diagnosis with multimodality images and image-guided therapy and surgery [1]. Image fusion combines complementary imagery from multiple sources in order to enhance the information apparent in the individual source images, as well as to increase the reliability of interpretation and classification [2]. Two common approaches are the wavelet transform and the various pyramid transforms; like any pyramid method, wavelet based fusion is a multiscale analysis method. Studies [3] have shown that the DWT method has more complete theoretical support as well as further development potential. This paper discusses the DWT and Laplacian pyramid methods and compares their statistical parameters. To measure image quality, quantitative evaluation of the fused imagery is important so that performance comparisons of the respective fusion algorithms can be carried out objectively and automatically. A quantitative metric may potentially be used as feedback to the fusion algorithm to further improve the fused image quality. In this work, quality assessment was based on the statistical parameters mean, standard deviation and entropy.

  1. THE LAPLACIAN PYRAMID

    Pixel-to-pixel correlations are first removed by subtracting a low-pass filtered copy of the image from the image itself [4]. The difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at a reduced sample density. Iterating the process at appropriately expanded scales generates a pyramid data structure. Let g0(i,j) be the original image and g1(i,j) the result of applying an appropriate low-pass filter to g0. The prediction error L0(i,j) is given by

    L0(i,j) = g0(i,j) - g1(i,j).

    The reduced image g1 is itself low-pass filtered to yield g2, and a second error image is obtained:

    L1(i,j) = g1(i,j) - g2(i,j).

    Repeating these steps yields the two-dimensional arrays L0, L1, L2, ..., Ln. If we imagine these arrays stacked one above another, the result is a tapering pyramid data structure. The value at each node of the pyramid represents the difference between two Gaussian-like functions convolved with the original image, and this difference is similar to the Laplacian operator. Equivalently, each node value is the difference between the convolutions of two equivalent weighting functions with the original image, which resembles convolving an appropriately scaled Laplacian weighting function with the image, so the node values can be obtained directly by applying this operator. Although fusion can be performed with more than two input images, this paper is restricted to the case of two input images (CT and MRI). A pyramid decomposition is performed on each source image, the decompositions are integrated to form a composite representation, and finally the fused image is reconstructed by an inverse pyramid transform.
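The pyramid fusion pipeline described above can be sketched in a few lines of Python with NumPy. This is an illustrative sketch, not the implementation used in the paper: for simplicity it uses 2x2 block averaging as the low-pass REDUCE step and pixel replication as EXPAND (the Gaussian-like weighting function of [4] would replace these), and it assumes image dimensions divisible by two at every level.

```python
import numpy as np

def reduce_(img):
    # REDUCE: low-pass filter and downsample by averaging each 2x2 block.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def expand(img, shape):
    # EXPAND: upsample by pixel replication back to the requested shape.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # g_{k+1} = REDUCE(g_k);  L_k = g_k - EXPAND(g_{k+1}).
    pyr, g = [], np.asarray(img, dtype=float)
    for _ in range(levels):
        g_next = reduce_(g)
        pyr.append(g - expand(g_next, g.shape))
        g = g_next
    pyr.append(g)  # coarsest Gaussian level kept at the top
    return pyr

def reconstruct(pyr):
    # Inverse transform: g_k = L_k + EXPAND(g_{k+1}).
    g = pyr[-1]
    for lap in reversed(pyr[:-1]):
        g = lap + expand(g, lap.shape)
    return g

def fuse(img_a, img_b, levels=3):
    # Integrate the two decompositions node by node (max-magnitude choice),
    # then invert the pyramid to obtain the fused image.
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa, pb)]
    return reconstruct(fused)
```

Because each L_k is formed by subtracting exactly what EXPAND later adds back, the reconstruction is exact for any choice of REDUCE/EXPAND filters.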

  2. THE WAVELET TRANSFORM

    When images are merged in wavelet space, different frequency ranges can be processed differently: high-frequency information from one image can be combined with lower-frequency information from another [14]. Wavelet based image fusion can therefore combine information from different sensors, for example merging the edge detail captured by one sensor with the content captured by another.

    Wavelets are finite-duration oscillatory functions [5] with zero average value. Their irregularity and good localization properties make them a better basis for the analysis of signals with discontinuities. The wavelet family can be written as

    ψa,b(t) = (1/√a) ψ((t - b)/a),  a, b ∈ R, a > 0,

    where a is the scale parameter and b the translation parameter; the mother wavelet ψ(t) undergoes translation and scaling operations to give a self-similar wavelet family. Practical implementation of the wavelet transform requires discretization of its translation and scale parameters, taking

    a = a0^j,  b = m b0 a0^j,  j, m ∈ Z,

    which gives

    ψj,m(t) = a0^(-j/2) ψ(a0^(-j) t - m b0).

    If the discretization is chosen as a0 = 2 and b0 = 1, this is the standard DWT. In practice the wavelet transform is computed by filtering followed by Nyquist (factor-of-two) downsampling.
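For the a0 = 2, b0 = 1 case, the simplest standard DWT is the Haar transform, where the analysis step reduces to pairwise averages and differences followed by downsampling by two. A minimal one-dimensional sketch (illustrative, not the paper's implementation; it assumes an even-length signal):

```python
import numpy as np

def haar_dwt1d(x):
    # Analysis: low-pass (average) and high-pass (difference) filtering,
    # each followed by downsampling by two (orthonormal 1/sqrt(2) scaling).
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt1d(a, d):
    # Synthesis: upsample and apply the reconstruction filters.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```

Since the Haar filters are orthonormal, the synthesis step reconstructs the input exactly.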

    Several methods [6] have been proposed for various applications utilizing the directionality, orthogonality and compactness of wavelets. A fusion process should preserve all the important information in the source images and should not introduce artifacts or inconsistencies, while suppressing undesirable characteristics such as noise and other irrelevant detail.

    The 2-D DWT [7] is performed by separately filtering and downsampling first the rows and then the columns of the data. This results in four sets of coefficients, corresponding to the low-low (LL), low-high (LH), high-low (HL) and high-high (HH) bands, the last of which captures diagonal detail. By recursively applying the same scheme to the LL subband, a multiresolution decomposition of any desired depth can be achieved; a DWT with K decomposition levels therefore has m = 3K + 1 frequency bands. Image fusion based on the DWT generally performs a multiresolution decomposition of each source image and then combines the coefficients according to a fusion rule.
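One level of the separable row-column scheme just described can be sketched with the Haar filters. This is an illustrative sketch, not the paper's implementation: the helper `_split` is ours, the subband naming follows the text, and even dimensions are assumed.

```python
import numpy as np

def _split(x, axis):
    # Haar analysis along one axis: low-pass (sum) and high-pass
    # (difference) filtering, each followed by downsampling by two.
    sl = [slice(None)] * x.ndim
    sl[axis] = slice(0, None, 2)
    even = x[tuple(sl)]
    sl[axis] = slice(1, None, 2)
    odd = x[tuple(sl)]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_dwt2d(img):
    # Rows first, then columns: one level of the separable 2-D Haar DWT,
    # yielding the four subbands LL, LH, HL, HH described in the text.
    img = np.asarray(img, dtype=float)
    lo, hi = _split(img, axis=1)   # filter and downsample across the rows
    ll, lh = _split(lo, axis=0)    # then down the columns of each result
    hl, hh = _split(hi, axis=0)
    return ll, lh, hl, hh
```

Applying `haar_dwt2d` recursively to the returned LL band gives the K-level decomposition with 3K + 1 bands noted above.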

  3. THE FUSION RULES

    The basic idea of all multiresolution fusion schemes [8] is motivated by the human visual system being primarily sensitive to local contrast changes, e.g. edges and corners. In image fusion, the corresponding coefficients of the input images are combined using a fusion rule. Three previously developed fusion rules are [9]:

    1. Maximum selection (MS) scheme

      This simple scheme just picks, in each subband, the coefficient with the largest magnitude.

    2. Weighted average (WA) scheme

      This scheme uses a normalized correlation between the two images' subbands over a small local area. The coefficient used for reconstruction is calculated from this measure as a weighted average of the two images' coefficients.

    3. Window based verification (WBV) scheme

      This scheme creates a binary decision map to choose between each pair of coefficients using a majority filter.

    Since wavelet coefficients with large absolute values [10] carry the information about salient features of the image, such as edges and lines, a good fusion rule is to take the maximum of the corresponding wavelet coefficients, i.e. pick the coefficient with the larger activity level and discard the other. We use the MS scheme in this work since it is simple and appears most frequently in the literature.
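For any multiresolution decomposition represented as a list of coefficient arrays, the MS rule reduces to an element-wise comparison. A minimal sketch (our helper name, not from the paper):

```python
import numpy as np

def fuse_ms(coeffs_a, coeffs_b):
    # Maximum-selection rule: for each pair of corresponding subbands,
    # keep the coefficient with the larger absolute value.
    return [np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in zip(coeffs_a, coeffs_b)]
```

The same function applies unchanged to Laplacian pyramid levels or DWT subbands, since both are lists of per-scale coefficient arrays.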

  4. PERFORMANCE ANALYSIS

    An ideal image fusion process [11] should preserve all useful patterns from the source images while minimizing artifacts that could interfere with subsequent analysis.

    Given that it is nearly impossible to fuse images without introducing some form of distortion [12], measurements of fused image quality are necessary. Available image metrics include normalized mutual information and the image quality index [13]. An important approach for describing a region is to quantify its texture content, and a frequently used approach to texture analysis is based on the statistical properties of the intensity histogram. One class of such measures is based on the statistical moments of the intensity values, and we used this approach for the texture evaluation of the fused images. Let z be a random variable denoting intensity, p(zi), i = 0, 1, ..., L-1, the histogram of the intensity levels in a region, and L the number of possible intensity levels.

    4.1. Mean

    The mean is a measure of average intensity:

    m = Σi zi p(zi),  summed over i = 0, ..., L-1.

    4.2. Standard deviation

    The standard deviation is a measure of average contrast. It is defined through the nth moment about the mean,

    µn = Σi (zi - m)^n p(zi),

    as σ = √µ2.

    4.3. Entropy

    The entropy is a measure of randomness:

    e = -Σi p(zi) log2 p(zi).

    All these statistical parameters can be calculated from the histogram of the image.
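The three histogram-based measures follow directly from the definitions above. A minimal sketch for 8-bit integer images (function name ours, not from the paper):

```python
import numpy as np

def histogram_stats(img, levels=256):
    # Normalized intensity histogram p(z_i) of an integer-valued image.
    img = np.asarray(img)
    p = np.bincount(img.ravel(), minlength=levels) / img.size
    z = np.arange(levels)
    mean = np.sum(z * p)                          # m = sum z_i p(z_i)
    std = np.sqrt(np.sum((z - mean) ** 2 * p))    # sigma = sqrt(mu_2)
    nz = p[p > 0]                                 # skip empty bins (0 log 0 = 0)
    entropy = -np.sum(nz * np.log2(nz))           # e = -sum p(z_i) log2 p(z_i)
    return mean, std, entropy
```

Applying this function to each fused image yields the values reported in Tables 1 and 2.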

  5. RESULTS

    Two image pairs, each containing one CT and one MRI image, were fused. By visual inspection we observed that in the first case the image fused by the Laplacian pyramid method offered better quality than the wavelet method, as shown in Fig. 1c, while in the second case the wavelet method offered better quality than the Laplacian pyramid method, as shown in Fig. 2d. For quantitative evaluation there are several metrics, as discussed in Section 4, but a single metric must be chosen to compare image quality across the two methods fairly. Entropy was selected because it measures the randomness of the image, although the most appropriate metric ultimately depends on the application.

    The experimental values are shown in Tables 1 and 2. The histograms of the fused images are shown in panels (e) and (f) of Figures 1 and 2.

    [Figure omitted in this text-only version.]

    Figure 1: (a) CT image, (b) MRI image, (c) result of fusion by Laplacian pyramid, (d) result of fusion by DWT, (e) histogram of (c), (f) histogram of (d).

    TABLE 1: Statistical parameters of the fused images

    Type of fusion       Mean      Standard deviation   Entropy
    Laplacian pyramid    64.9630   58.2826              7.2018
    Wavelet (DWT)        68.2223   59.0476              7.2845

    [Figure omitted in this text-only version.]

    Figure 2: (a) CT image, (b) MRI image, (c) result of fusion by Laplacian pyramid, (d) result of fusion by DWT, (e) histogram of (c), (f) histogram of (d).

    TABLE 2: Statistical parameters of the fused images

    Type of fusion       Mean       Standard deviation   Entropy
    Laplacian pyramid    220.3953   38.2669              6.5250
    Wavelet (DWT)        227.5835   28.1148              6.2254

  6. REFERENCES

  1. R. H. Rogers and L. Wood, "The history and status of merging multiple sensor data: an overview," in Technical Papers 1990, ACSM-ASPRS Annual Convention, Image Processing and Remote Sensing 4, pp. 352-360, 1990.

  2. Y. Zheng, E. A. Essock and B. C. Hansen, "An advanced DWT fusion algorithm and its optimization by using the metric of image quality index," Optical Engineering 44(3), 037003-1-12, 2005.

  3. P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Transactions on Communications COM-31(4), pp. 532-540, 1983.

  4. S. Vekkot and P. Shukla, "A novel architecture for wavelet based image fusion," World Academy of Science, Engineering and Technology 57, 2009.

  5. O. Rockinger, "Pixel-level fusion of image sequences using wavelet frames," Proc. of the 16th Leeds Applied Shape Research Workshop, Leeds University Press, pp. 149-154, 1996.

  6. Y. Yang, "Performing wavelet based image fusion through different integration schemes," International Journal of Digital Content Technology and its Applications 5(3), March 2011.

  7. P. Hill, N. Canagarajah and D. Bull, "Image fusion using complex wavelets," BMVC 2002, The University of Bristol, pp. 487-496, 2002.

  8. T. A. Wilson, S. K. Rogers and L. R. Myers, "Perceptual-based hyperspectral image fusion using multiresolution analysis," Optical Engineering 34(11), pp. 3154-3164, 1995.

  9. H. Li, B. S. Manjunath and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing 57, pp. 235-245, 1995.

  10. Y. Zheng, "Multi-scale fusion algorithm comparisons: pyramid, DWT and iterative DWT," 12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009.

  11. P. J. Burt and E. H. Adelson, "Merging images through pattern decomposition," Proc. SPIE 575, pp. 173-182, 1985.

  12. R. C. Gonzalez, R. E. Woods and S. L. Eddins, Digital Image Processing Using MATLAB, 2nd ed., McGraw Hill.

  13. A. M. Eskicioglu and P. S. Fisher, "Image quality measures and their performance," IEEE Transactions on Communications 43(12), pp. 2959-2965, 1995.
