Medical Image Fusion Using Non-Subsampled Contourlet Transform

DOI : 10.17577/IJERTV3IS031161


M. Nazrudeen1

1Department of Computer Science and Engineering,

P. A. College of Engineering and Technology,

Coimbatore, Tamil Nadu, India.

Mrs. M. Rajalakshmi2

2Assistant Professor,

Department of Computer Science and Engineering,

P. A. College of Engineering and Technology,

Coimbatore, Tamil Nadu, India.

Mr. S. Sureshkumar3

3Assistant Professor,

Department of Computer Science and Engineering,

P. A. College of Engineering and Technology,

Coimbatore, Tamil Nadu, India.

Abstract: Image fusion is a technique used in many application areas. In this work, a fusion rule for multimodal medical images based on the Non-Subsampled Contourlet Transform (NSCT) is applied; it is particularly useful in medical applications. The source medical images are first transformed by NSCT, which decomposes each image into low-frequency and high-frequency components. Two fusion rules, based on the phase congruency method and the directive contrast technique, are then used to fuse the low-frequency and the high-frequency coefficient images, respectively. Finally, the fused image is obtained by applying the inverse NSCT to all composite coefficients. The proposed fusion framework provides a better way to analyse multimodal images. It is evaluated on three clinical examples of patients affected by Alzheimer's disease, stroke and recurrent tumor. The source medical images are decomposed and fused in MATLAB; the inputs are taken from CT and MRI data sets, and the results are presented as MATLAB simulations.

Keywords: Multimodal medical image fusion; non-subsampled contourlet transform; phase congruency; directive contrast.

    1. INTRODUCTION

In today's advanced world, the medical field has grown tremendously, and many new techniques have been introduced to diagnose a patient's disease quickly and accurately. One such technique is the fusion of different medical images. The objective of image fusion is to merge the quality and useful information from different medical images into one fused image that carries more accurate and clearer information; it combines the relevant information from a set of images of the same scene into a single image, so that the fused image is more complete and informative than any of the input images [1]. Fusion of multimodal images can be very useful for clinical applications such as diagnosis and treatment planning. The input images taken for the fusion process are at different resolutions and intensity values, and fusing them helps physicians extract features that may not normally be visible in a single image from one modality.

There are different types of medical images, for example CT, MRI, PET and SPECT images. A CT image is a type of X-ray technology used to examine broken bones, blood clots, tumors, blockages and heart disease [2]. CT provides better information about denser tissue, and the structure of bone is better visualized in CT images. MRI is a medical diagnostic imaging technique used to look at blood vessels, the brain, the heart, the spinal cord and other internal organs. MRI provides better information on soft tissue, and normal and pathological soft tissues are better visualized. The information provided by Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) is therefore complementary. The composite image not only provides salient information from both images but also reveals the position of soft tissue with respect to the bone structure.

    2. DIFFERENT LEVELS OF FUSION

Image fusion is normally performed at three different levels: pixel-level, feature-level and decision-level fusion [14]. This categorization is based on the stage at which the merging takes place. Feature-level image fusion extracts salient features, such as pixel intensities, edges or textures, from the images to be fused, and similar features from the input images are then combined to obtain the new image. Decision-level image fusion operates on compact data and merges information at a higher level of abstraction: it combines the results from multiple algorithms to yield a final fused decision [3]. Decision-level fusion is effective for complicated systems but is not suitable for general applications.

In pixel-based image fusion, the fusion process is performed on a pixel-by-pixel basis. It generates a fused image in which the information associated with each pixel is determined from a set of pixels in the source images, in order to improve the performance of image processing tasks such as segmentation [15]. Pixel-level fusion operates at the lowest level and retains detailed information. Most medical image fusion methods employ pixel-level fusion because of its easy implementation, preservation of the original measured quantities and efficient computation.
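As a simple illustration of pixel-level fusion (not the method proposed in this paper), the sketch below averages two registered source images pixel by pixel; the file names are placeholders.

% Minimal pixel-level fusion by averaging (illustration only).
A = im2double(imread('ct.png'));    % registered CT slice (placeholder file name)
B = im2double(imread('mri.png'));   % registered MRI slice (placeholder file name)
F = (A + B) / 2;                    % per-pixel average of the two modalities
imshow(F)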

In this paper, a fusion technique is proposed for multimodal medical images based on the Non-Subsampled Contourlet Transform (NSCT). The main idea is to perform NSCT on the medical images, decomposing them into low-frequency and high-frequency coefficient images, and then to apply a fusion rule to the low-frequency coefficients and another to the high-frequency coefficients. The phase congruency method and the directive contrast technique are efficient methods for fusing the low-frequency and the high-frequency coefficient images, respectively [13]. Phase congruency provides a contrast- and brightness-invariant representation of the low-frequency coefficients, while directive contrast selects the frequency coefficients from the clear parts of the high-frequency images. The combination of these two methods preserves more detail from the source images and further improves the quality of the fused image. The PSNR and RMSE values of the proposed technique show better fusion results than those of other conventional image fusion techniques.

The rest of the paper is organized as follows. Section 2 reviews the different levels of fusion. NSCT, phase congruency and directive contrast are described in Section 3, followed by the proposed multimodal medical image fusion framework in Section 4. Experimental results and discussion are given in Section 5, and the conclusion and future work are described in Section 6.

    3. PROPOSED SYSTEM

This section describes the concepts on which the proposed framework is based: NSCT, the phase congruency method and the directive contrast technique in the NSCT domain.

      1. Non-Subsampled Contourlet Transform (NSCT)

The NSCT is a fully multi-scale, multi-direction and shift-invariant expansion of the contourlet transform, with a fast implementation. It is a tool that provides a better representation of image contours [4] and has useful properties for image decomposition. It achieves a subband decomposition similar to that of contourlets, but without downsamplers and upsamplers. Because of its redundancy, the filter design problem of the NSCT is much less constrained than that of contourlets, which makes it possible to design filters with better frequency selectivity and thereby achieve a better subband decomposition [5]. Using the mapping approach, a framework for filter design is obtained that ensures good frequency selectivity in addition to a fast implementation through sequential steps [6]. The NSCT has proven to be very efficient in image noise removal and image enhancement.

The NSCT decomposes the medical images in two stages: the Non-Subsampled Pyramid (NSP) stage and the Non-Subsampled Directional Filter Bank (NSDFB) stage.

1. Non-Subsampled Pyramid (NSP)

The multiscale property of the NSCT is obtained from a shift-invariant filtering structure that achieves a subband decomposition similar to that of the Laplacian pyramid [9]. This is done using two-channel non-subsampled filter banks. The filters for subsequent stages are obtained by upsampling the filters of the first stage, which gives the multiscale property without the need for additional filters.

The decomposition is obtained by removing the downsamplers and upsamplers in the Laplacian pyramid and upsampling the filters accordingly. The advantage of this method is that better low-pass and high-pass frequency coefficients are obtained [8].
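To make the filter-upsampling ("a trous") idea concrete, the sketch below applies generic lowpass/highpass kernels without any downsampling, upsampling the filters by zero insertion at each level. The kernels h0 and h1 and the helper upsample_filter are illustrative stand-ins and are not the NSP filters used by the actual NSCT.

% Sketch of a non-subsampled pyramid: filters are upsampled, the image is not.
h0 = fspecial('gaussian', 5, 1);           % stand-in lowpass analysis kernel
h1 = zeros(5); h1(3,3) = 1; h1 = h1 - h0;  % complementary highpass kernel
x  = im2double(imread('ct.png'));          % placeholder input image

low = x;
for j = 0:2                                 % three pyramid levels
    hj0 = upsample_filter(h0, 2^j);         % insert 2^j - 1 zeros between taps
    hj1 = upsample_filter(h1, 2^j);
    band{j+1} = imfilter(low, hj1, 'symmetric');   % highpass (bandpass) subband
    low       = imfilter(low, hj0, 'symmetric');   % lowpass passed to next level
end

function hu = upsample_filter(h, r)
% Zero-insertion (a trous) upsampling of a 2-D filter kernel.
    hu = zeros((size(h,1)-1)*r + 1, (size(h,2)-1)*r + 1);
    hu(1:r:end, 1:r:end) = h;
end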

        2. Non-Subsampled Directional Filter Bank (NSDFB)

The NSDFB is a two-channel non-subsampled filter bank. A shift-invariant directional expansion is obtained with the non-subsampled DFB (NSDFB) [9]. This filter bank is constructed by combining directional fan filter banks and eliminating the downsamplers and upsamplers of the DFB: the downsamplers and upsamplers in each two-channel filter bank of the DFB tree are switched off, and the filters are upsampled accordingly.

The NSDFB decomposes the high-frequency images produced by the NSP at each scale into directional sub-images with the same size as the source image [7]. It therefore gives the NSCT its multi-direction property and provides more precise directional detail information.

Fig. 1. Non-Subsampled Contourlet Transform: the input image is decomposed into a lowpass subband and bandpass directional subbands.

      2. Phase Congruency

Phase congruency is an edge detection method that is particularly robust against changes in illumination and contrast. The approach is based on the Local Energy Model, in which important features are found at points of an image where the Fourier components are maximally in phase. The technique is used for feature detection [10] and blur estimation.

The phase congruency algorithm provides good feature localization and noise compensation. Logarithmic Gabor filter banks are first applied to the 2-D image to obtain the local amplitude and phase, which reflect the behaviour of the image in the frequency domain [11]. It has been noted that edge-like features have many of their frequency components in the same phase; these correspond to the edges of an image, where the sharpness changes between light and dark.

Phase congruency was proposed as an intensity- and contrast-invariant, dimensionless measure of feature significance, and it has been used for signal matching and feature extraction.
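As a small 1-D toy sketch (not the paper's code), the simplified Morrone-Owens phase congruency measure, the magnitude of the summed complex filter responses divided by the sum of their amplitudes, can be computed with a bank of log-Gabor filters. All parameter values below are illustrative assumptions; the measure peaks at the step edge (and at the periodic wrap-around of the signal).

% 1-D toy example: phase congruency of a step signal.
x = [zeros(1,128) ones(1,128)];           % step signal
N = numel(x);  X = fft(x);
f = (0:N-1)/N;  f(1) = 1e-6;              % normalized frequencies (avoid log(0))
nscale = 4;  minWav = 4;  mult = 2;  sigmaOnf = 0.55;
sumE = zeros(1,N);  sumA = zeros(1,N);
for s = 1:nscale
    f0 = 1/(minWav*mult^(s-1));           % centre frequency of this scale
    G  = exp(-(log(f/f0)).^2 / (2*log(sigmaOnf)^2));   % log-Gabor transfer function
    G(f > 0.5) = 0;                        % keep positive frequencies only
    r  = ifft(X .* G);                     % complex (analytic-like) response
    sumE = sumE + r;  sumA = sumA + abs(r);
end
PC = abs(sumE) ./ (sumA + eps);            % simplified phase congruency measure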

      3. Directive Contrast in NSCT Domain

The directive contrast feature measures the difference between the intensity value of a pixel and the intensity values of its neighbouring pixels. The same intensity value can appear different depending on the intensities of the neighbouring pixels [12]. Therefore, a local contrast is defined as

C = (L - L_B) / L_B = L_H / L_B    (1)

where L is the local luminance and L_B is the luminance of the local background. In general, L_B is regarded as the local low-frequency component, and L - L_B = L_H is treated as the local high-frequency component, which is taken as the pixel value in the multiresolution domain.

Considering a single pixel alone, however, is insufficient to determine the salient features reliably. Therefore, the directive contrast is integrated with the sum-modified-Laplacian to obtain more accurate salient features.

In general, larger values of the high-frequency coefficients correspond to sharper brightness changes such as edges, lines and region boundaries. A proper way of selecting the high-frequency coefficients is necessary to ensure better information interpretation. Hence, the sum-modified-Laplacian is integrated with the directive contrast in the NSCT domain to produce accurate salient features.

Fig. 2. Block diagram of the proposed multimodal medical image fusion framework: source images A and B are decomposed by NSCT into low-frequency (A_low, B_low) and high-frequency (A_high, B_high) subbands; the fused low-frequency and high-frequency subbands are recombined by the inverse NSCT to give the fused image F.

4. PROPOSED MULTIMODAL MEDICAL IMAGE FUSION FRAMEWORK: IMAGE FUSION USING NSCT ALGORITHM

This paper proposes a new image fusion framework for multimodal medical images based on the NSCT.

The proposed algorithm for the fusion of medical images (CT and MRI) using the NSCT is summarized as follows:

Step 1: Decompose the source images into the desired number of levels using the NSCT.
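A minimal sketch of this decomposition step is given below, assuming the publicly available NSCT toolbox; the function nsctdec, its argument order and the filter names are assumptions based on that toolbox and are not code from the paper, and the file names are placeholders.

% Decompose both registered source images with the NSCT (sketch).
img1 = im2double(imread('ct.png'));       % placeholder file names
img2 = im2double(imread('mri.png'));
levels = [2 3];                            % directional decomposition levels per scale (assumption)
% nsctdec is assumed to return a cell array: coeffs{1} is the lowpass subband,
% coeffs{k} (k > 1) holds the bandpass directional subbands of scale k-1.
img1_NSCT = nsctdec(img1, levels, 'dmaxflat7', 'maxflat');
img2_NSCT = nsctdec(img2, levels, 'dmaxflat7', 'maxflat');
img1_NSCT_Low  = img1_NSCT{1};             % low-frequency coefficient images
img2_NSCT_Low  = img2_NSCT{1};
img1_NSCT_high = img1_NSCT{2};             % one scale of high-frequency subbands
img2_NSCT_high = img2_NSCT{2};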

Step 2: Two different fusion rules are introduced for combining the low-frequency and the high-frequency coefficient images.

Fusion of Low Frequency Coefficients Image

Step 3: To fuse the low-frequency coefficient images, the phase congruency method is applied. The benefit of phase congruency is that it selects and combines the contrast- and brightness-invariant representation contained in the low-frequency coefficients.

%% Fusion of Low sub band
%% Phase Congruency
[phaseCongruency_img1, orientation_img1] = phasecong(img1_NSCT_Low);
figure, imshow(phaseCongruency_img1)
[phaseCongruency_img2, orientation_img2] = phasecong(img2_NSCT_Low);
figure, imshow(phaseCongruency_img2)

img_NSCT_Low_fuse = zeros(size(img1_NSCT_Low));
for i = 1:size(phaseCongruency_img1, 1)
    for j = 1:size(phaseCongruency_img1, 2)
        if phaseCongruency_img1(i,j) > phaseCongruency_img2(i,j)
            % image A has the stronger feature at this pixel
            img_NSCT_Low_fuse(i,j) = img1_NSCT_Low(i,j);
        elseif phaseCongruency_img1(i,j) < phaseCongruency_img2(i,j)
            % image B has the stronger feature at this pixel
            img_NSCT_Low_fuse(i,j) = img2_NSCT_Low(i,j);
        else
            % equal phase congruency: average the two coefficients
            img_NSCT_Low_fuse(i,j) = (img1_NSCT_Low(i,j) + img2_NSCT_Low(i,j)) / 2;
        end
    end
end
figure, imshow(img_NSCT_Low_fuse, [])

      Fusion of High Frequency Coefficients Image

Step 4: To fuse the high-frequency coefficient images, the directive contrast technique is used.

The most prominent texture and edge information is selected from the high-frequency coefficients of the source images and combined to form the fused high-frequency image.

% Fusion of High Sub band: directive contrast via the sum-modified-Laplacian (SML)
for i = 1:length(img1_NSCT{1,2})
    smlA{i} = SML(img1_NSCT_high{i});      % focus measure for each subband of image A
end
for i = 1:length(img2_NSCT{1,2})
    smlB{i} = SML(img2_NSCT_high{i});      % focus measure for each subband of image B
end
for i = 1:length(img1_NSCT{1,2})
    DirA{i} = smlA{i};
    nz = img1_NSCT_high{i} ~= 0;           % guard against division by zero coefficients
    DirA{i}(nz) = smlA{i}(nz) ./ img1_NSCT_high{i}(nz);
end
for i = 1:length(img2_NSCT{1,2})
    DirB{i} = smlB{i};
    nz = img2_NSCT_high{i} ~= 0;
    DirB{i}(nz) = smlB{i}(nz) ./ img2_NSCT_high{i}(nz);
end
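The listing above computes the directive contrasts DirA and DirB but does not show the final selection step, and the helper SML is not listed in the paper. The following is a minimal sketch of both, assuming a choose-max selection rule and a 3x3 sum-modified-Laplacian window; the name img_NSCT_high_fuse and the window size are assumptions.

% Sketch: keep the high-frequency coefficient with the larger directive contrast.
for i = 1:length(img1_NSCT{1,2})
    mask = DirA{i} >= DirB{i};                       % where image A is "clearer"
    img_NSCT_high_fuse{i} = img2_NSCT_high{i};
    img_NSCT_high_fuse{i}(mask) = img1_NSCT_high{i}(mask);
end

function s = SML(x)
% Sketch of a sum-modified-Laplacian: modified Laplacian summed over a 3x3 window.
    ml = abs(2*x - circshift(x, [0 1]) - circshift(x, [0 -1])) + ...
         abs(2*x - circshift(x, [1 0]) - circshift(x, [-1 0]));
    s  = conv2(ml, ones(3), 'same');                 % local sum of the modified Laplacian
end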

Step 5: The fused low-frequency image and the fused high-frequency image are now obtained.

Step 6: The inverse NSCT is applied to the composite coefficients to obtain the fused medical image.
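A minimal sketch of the reconstruction step is given below, again assuming the NSCT toolbox (nsctrec and its argument order are assumptions) and reusing the variable names from the earlier sketches; only one high-frequency scale is shown.

% Sketch: rebuild the coefficient structure and invert the NSCT.
fused_NSCT    = img1_NSCT;                 % reuse the structure of one decomposition
fused_NSCT{1} = img_NSCT_Low_fuse;         % fused low-frequency subband
fused_NSCT{2} = img_NSCT_high_fuse;        % fused high-frequency subbands (one scale shown)
F = nsctrec(fused_NSCT, 'dmaxflat7', 'maxflat');   % inverse NSCT gives the fused image
figure, imshow(F, [])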

5. RESULTS

The general requirement of an image fusion process is to preserve all valid and useful information from the source images while not introducing any distortion into the fused image. Performance measures are essential to quantify the possible benefits of fusion and to compare the results obtained with different algorithms.

        1. Peak Signal to Noise Ratio:


PSNR is defined as the ratio between the maximum possible power of a signal and the power of the noise that corrupts the fidelity of its representation. The PSNR is used to measure the similarity between two images, here between the reference image R and the fused image F. The PSNR value is calculated as

PSNR = 10 log10( MAX_I^2 / RMSE^2 )    (2)

where MAX_I is the maximum possible pixel intensity of the image.


        2. Root Mean Square Error (RMSE)

The Root Mean Square Error (RMSE) between the fused image and the reference image expresses the error relative to the mean intensity of the reference image. The RMSE value is calculated as:

RMSE = sqrt( (1 / (m n)) * sum_{i=0}^{m-1} sum_{j=0}^{n-1} [ R(i, j) - F(i, j) ]^2 )    (3)


where R(i, j) is the reference image, F(i, j) is the fused image of CT and MRI, and m and n are the image dimensions. The smaller the RMSE value, the better the performance of the fusion algorithm.
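A minimal sketch of these two measures is given below, assuming the reference image R and the fused image F are double arrays of the same size scaled to [0, 1] (so MAX_I = 1); the variable names are placeholders.

% Compute RMSE and PSNR between a reference image R and a fused image F.
[m, n] = size(R);
RMSE  = sqrt( sum((R(:) - F(:)).^2) / (m * n) );   % equation (3)
MAX_I = 1;                                         % maximum intensity for [0,1] images
PSNR  = 10 * log10( MAX_I^2 / RMSE^2 );            % equation (2)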

TABLE I. COMPARISON OF PSNR AND RMSE VALUES FOR DIFFERENT MEDICAL IMAGE FUSION TECHNIQUES (IMAGE SET: CT AND MRI SCAN)

Different Fusion Methods                                                                              RMSE     PSNR
Medical Image Fusion Based on Ripplet Transform                                                       0.1013   30.45
Multisensor Image Fusion Using Wavelet Transform                                                      0.0756   34.00
Curvelet Fusion of MRI and CT Images                                                                  0.0516   36.52
Multifocus Image Fusion Based on Non-Subsampled Contourlet Transform                                  0.0344   38.09
Multifocus Image Fusion Based on Non-Subsampled Contourlet Transform and Directive Contrast Method    0.0211   41.66


Fig. 3. Results for the medical source images: a) CT image. b) MRI image. c) Result of the phase congruency method. d) Result of the directive contrast technique. e) Fused low-frequency coefficient image. f) Fused high-frequency coefficient image. g) Proposed fused medical image. h) Color representation of the proposed fused medical image.

Fig. 4. Comparison of RMSE values for the various image fusion techniques (RMSE value versus the different transform methods).

Fig. 5. Comparison of PSNR values for the various image fusion techniques (PSNR value versus the different transform methods).

6. CONCLUSION

In this paper, an image fusion technique is proposed for multimodal medical images (CT and MRI) based on the Non-Subsampled Contourlet Transform, the phase congruency model and the directive contrast technique. The medical images are first decomposed into low-frequency and high-frequency coefficient images using the NSCT. Two different fusion rules are then applied: the low-frequency coefficient images are fused using the phase congruency technique (low-band fusion rule), and the high-frequency coefficient images are fused using the directive contrast technique (high-band fusion rule). Finally, the fused coefficients are reconstructed by the inverse NSCT. In this way more information is preserved in the fused image with improved quality. The proposed method has an advantage over classical fusion models such as IHS, PCA and Brovey, and over multiscale transform methods such as DWT with spatial frequency (SF) and the Stationary Wavelet Transform. The proposed algorithm is evaluated with the qualitative analytical measurement of PSNR and RMSE values.

FUTURE WORK

The simulation results show the superiority of the NSCT; in future work the results will be compared with those of the Dual-Tree Complex Wavelet Transform (DTCWT). A higher PSNR and a lower RMSE indicate better performance of the fusion algorithm.

REFERENCES

[1] L. Yang, B. L. Guo, and W. Ni, "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform," Neurocomputing, vol. 72, pp. 203-211, 2008.

[2] T. Li and Y. Wang, "Biological image fusion using a NSCT based variable-weight method," Inf. Fusion, vol. 12, no. 2, pp. 85-92, 2011.

[3] F. E. Ali, I. M. El-Dokany, A. A. Saad, and F. E. Abd El-Samie, "Curvelet fusion of MR and CT images," Progr. Electromagn. Res. C, vol. 3, pp. 215-224, 2008.

[4] X. Qu, J. Yan, H. Xiao, and Z. Zhu, "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Automatica Sinica, vol. 34, no. 12, pp. 1508-1514, 2008.

[5] Q. Zhang and B. L. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform," Signal Process., vol. 89, no. 7, pp. 1334-1346, 2009.

[6] Y. Chai, H. Li, and X. Zhang, "Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain," Optik, vol. 123, pp. 569-581, 2012.

[7] S. Yang, M. Wang, Y. Lu, W. Qi, and L. Jiao, "Fusion of multiparametric SAR images based on SW-nonsubsampled contourlet and PCNN," Signal Process., vol. 89, no. 12, pp. 2596-2608, 2009.

[8] Y. Chai, H. Li, and X. Zhang, "Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain," Optik - Int. J. Light Electron Opt., vol. 123, no. 7, pp. 569-581, 2012.

[9] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: Theory, design, and applications," IEEE Trans. Image Process., vol. 15, no. 10, pp. 3089-3101, Oct. 2006.

[10] P. Kovesi, "Image features from phase congruency," Videre: J. Comput. Vision Res., vol. 1, no. 3, pp. 2-26, 1999.

[11] P. Kovesi, "Phase congruency: A low-level image invariant," Psychol. Res. (Psychologische Forschung), vol. 64, no. 2, pp. 136-148, 2000.

[12] G. Bhatnagar and B. Raman, "A new image fusion technique based on directive contrast," Electron. Lett. Comput. Vision Image Anal., vol. 8, no. 2, pp. 18-38, 2009.

[13] Q. Guihong, Z. Dali, and Y. Pingfan, "Medical image fusion by wavelet transform modulus maxima," Opt. Express, vol. 9, pp. 184-190, 2001.

[14] V. Barra and J. Y. Boire, "A general framework for the fusion of anatomical and functional medical images," NeuroImage, vol. 13, no. 3, pp. 410-424, 2001.

[15] S. Das, M. Chowdhury, and M. K. Kundu, "Medical image fusion based on ripplet transform type-I," Progr. Electromagn. Res. B, vol. 30, pp. 355-370, 2011.
