Wavelet Transform based Multi-Modality Medical Image Fusion

DOI : 10.17577/IJERTCONV3IS27100


1Bhavana. V

Department of Computer Science & Engineering

R.V. College of Engineering Bengaluru, India

2Krishnappa H.K

Department of Computer Science & Engineering

R.V. College of Engineering Bengaluru, India

      Abstract: Medical imaging modalities such as magnetic resonance imaging (MRI), computerized tomography (CT) and positron emission tomography (PET) have been developed and are widely used for clinical diagnosis. In brain imaging, the MR image provides high-resolution anatomical information in gray intensity, while the PET image reveals biochemical changes in color without anatomical information. These two types of images contain complementary information with which a brain disease can be diagnosed accurately and effectively. Fusing two different medical images into a single image with both anatomical structural and spectral information is therefore highly desired. In this paper we present a PET and MR image fusion method based on the wavelet transform. The method produces good fusion results with reduced color distortion and without losing any anatomical information. We used three brain image datasets for testing and comparison: normal axial, normal coronal, and Alzheimer's disease.

      Keywords: Multimodal medical images; wavelet transform; image fusion

      1. INTRODUCTION

        In the medical field, doctors (radiologists) require high spatial and high spectral information in a single image for purposes such as research, monitoring, accurate disease diagnosis and treatment planning. This type of information cannot be obtained from single-modality images: Computed Tomography (CT) images, which are most popular for showing bone structures, lack information about soft tissues; Magnetic Resonance Imaging (MRI) images provide soft-tissue information but lack boundary information; and Positron Emission Tomography (PET) images provide clear information about blood flow but also lack boundary information. Every single-modality image therefore has its own drawbacks, because each image is captured with a different radiation power. To resolve this, complementary information from multiple image modalities is required, and fusion is the technique used to combine multi-modality medical images such as CT, MRI and PET. Image fusion combines appropriate information from two or more images into a single fused image, so that the resulting image provides more informative content than any of the input images [1]. Fused images are used in many applications such as aerial and satellite imaging, medical imaging, robot vision and multi-focus image fusion.
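For concreteness, the simplest possible fusion rule is plain pixel averaging of two co-registered images. The following minimal Python sketch (assuming 8-bit grayscale NumPy arrays of the same size; the function name is illustrative) shows this baseline, which the methods discussed in the rest of the paper aim to improve upon.

```python
import numpy as np

def average_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two co-registered 8-bit grayscale images by pixel averaging.

    This is the naive baseline that wavelet-, PCA- and region-based
    methods try to improve upon.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    fused = 0.5 * (a + b)
    return np.clip(fused, 0, 255).astype(np.uint8)
```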

      2. LITERATURE REVIEW

        Image fusion is a process of combining multiple input images of the same scene into a single fused image that retains the important information and features of each original image and is more suitable for human and machine perception. A novel region-based image fusion method is explained by Tanish Zaveri et al. [1], which showed that a region-based fusion algorithm performs better than a pixel-based fusion method.

        After studying the principles and characteristics of the discrete wavelet framework, Yijian Pei et al. [2] explained an improved discrete-wavelet-framework-based image fusion algorithm. The improvement targets the region characteristics of the high-frequency sub-band images. The wavelet-transform-based algorithm introduces less noise than the weighted-average algorithm, and the useful information of each multi-sensor source image is retained if the coefficients are synthesized effectively.
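The general DWT fusion idea summarized above can be sketched as follows. This is a generic illustration using the PyWavelets library, not the exact algorithm of [2]: the approximation band is averaged and, in each high-frequency sub-band, the larger-magnitude coefficient is kept, which is one common fusion rule. Inputs are assumed to be co-registered grayscale images of equal size.

```python
import numpy as np
import pywt

def dwt_fusion(img_a, img_b, wavelet="db2", level=2):
    """Fuse two co-registered grayscale images in the wavelet domain.

    Rule used here (a common choice, not necessarily the rule of [2]):
    average the approximation band, keep the larger-magnitude detail
    coefficient in every high-frequency sub-band.
    """
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)

    fused = [0.5 * (ca[0] + cb[0])]  # approximation band: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(x) >= np.abs(y), x, y)   # max-magnitude rule
            for x, y in ((ha, hb), (va, vb), (da, db))
        ))
    return pywt.waverec2(fused, wavelet)
```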

        Another image fusion algorithm was proposed by Patil et al. [3] using hierarchical PCA. The authors describe image fusion as the process of integrating two or more images of the same scene to obtain a more informative image. They propose a fusion algorithm that combines pyramid and PCA techniques and carry out a quality analysis of the proposed algorithm without a reference image; the approach can be used for feature extraction, dimension reduction and image fusion.
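The basic PCA weighting step that such methods build on can be illustrated as below. This is a generic sketch (the hierarchical/pyramid part of [3] is omitted) in which the fusion weights are taken from the principal eigenvector of the covariance matrix of the two source images; inputs are assumed co-registered and of equal size.

```python
import numpy as np

def pca_fusion(img_a, img_b):
    """Weighted fusion where the weights come from the principal
    eigenvector of the 2x2 covariance matrix of the two sources."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))   # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    principal = np.abs(eigvecs[:, -1])               # eigenvector of the largest eigenvalue
    w = principal / principal.sum()                  # normalize weights to sum to 1
    return w[0] * a + w[1] * b
```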

        A new algorithm was proposed by S. Daneshvar et al. [4], which integrates the advantages of both the IHS and retina-inspired model (RIM) fusion methods to improve the functional and spatial information content. Visual and statistical analyses show that the proposed algorithm significantly improves fusion quality in terms of entropy, mutual information, discrepancy and average gradient compared with fusion methods including IHS, Brovey, the discrete wavelet transform (DWT), the a-trous wavelet and RIM.

        Two fusion methods were proposed by Phen-Lan Lin et al. [5], namely IHS&LG+ and IHS&LG++, based on IHS and the log-Gabor wavelet for fusing PET and MRI images. The first method chooses a suitable decomposition scale and orientation for different regions of the images; the second refines the fused intensity of the first to further reduce color distortion and reinforce the anatomical structure. These methods use the hue angle of each pixel in the PET image to divide both the PET and MRI images into regions of high and low activity. The fused intensity of each region is obtained by inverse log-Gabor transforming the high-frequency coefficients of the MRI intensity and the low-frequency coefficients of the PET intensity component.
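The hue-angle-based splitting into high- and low-activity regions, which the proposed method also relies on, can be illustrated roughly as follows. The sketch uses the HSV hue from scikit-image as a stand-in for the IHS hue angle, and the threshold value is illustrative rather than taken from [5].

```python
import numpy as np
from skimage.color import rgb2hsv

def split_by_hue(pet_rgb, hue_threshold=0.5):
    """Return boolean masks for high- and low-activity regions of a
    color PET image based on the per-pixel hue angle.

    HSV hue (in [0, 1]) is used here as a stand-in for the IHS hue
    angle; the threshold is illustrative, not taken from the paper.
    """
    hue = rgb2hsv(pet_rgb)[..., 0]
    high_activity = hue < hue_threshold   # roughly the warm (red/yellow) colors
    low_activity = ~high_activity         # roughly the cool (blue) colors
    return high_activity, low_activity
```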

        A new approach for PET-MRI image fusion using the wavelet transform and spatial frequency was proposed by Maruturi Haribabu et al. [6]. The method eliminates the influence of image imbalance, reduces blurring in the fused image, improves clarity and provides more reference information for doctors. The experimental results showed that the proposed method is superior to the traditional PCA-based algorithm in terms of both visual and quantitative fusion results.
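Spatial frequency, which [6] combines with the wavelet transform, is conventionally defined as the root of the sum of squares of the row and column frequencies, i.e. the RMS of the horizontal and vertical first differences of an image block. A minimal implementation of this standard definition is shown below.

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """Spatial frequency of an image block: sqrt(RF^2 + CF^2), where RF
    and CF are the RMS of horizontal and vertical first differences."""
    b = block.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))  # row frequency (horizontal differences)
    cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))  # column frequency (vertical differences)
    return float(np.sqrt(rf ** 2 + cf ** 2))
```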

      3. PROPOSED METHOD

        The system architecture of the proposed image fusion method is shown in Fig. 1.

        Fig. 1 The system architecture of the proposed method

        As shown in Fig. 1, PET and MR images are taken as input for preprocessing and enhancement. The preprocessing stage removes noise and enhances the input images using a Gaussian filter; filtering is used mainly to smooth or sharpen the input images. The PET image is first decomposed by the Intensity Hue Saturation (IHS) transform, and the high-activity region is separated from the low-activity region using the hue angle obtained from the IHS transform. The high-activity and low-activity regions of the PET image carry more anatomical and spectral information, respectively. Both regions are therefore decomposed by a 4-level DWT to obtain high- and low-frequency coefficients. The high-frequency coefficients of the PET and MR images are combined using an averaging method, and the inverse DWT is performed to obtain the fused high-frequency output. Similarly, by combining the low-frequency coefficients of the PET and MR images into a complete set of wavelet coefficients and performing the inverse DWT, the fused result for the low-activity region is obtained. To obtain better structural information, fuzzy c-means clustering is used, and to avoid color distortion, color patching is also performed. Finally, the fused image is extracted and displayed with less color distortion and without losing any structural information.
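A condensed sketch of this pipeline is given below. It assumes co-registered inputs of equal size with values in [0, 1], uses the HSV transform from scikit-image as a stand-in for IHS, 'db2' as the mother wavelet and sigma = 1.0 for the Gaussian pre-filter (all illustrative choices not specified in the paper), collapses the separate high-/low-activity passes into a single pass, and omits the fuzzy c-means and color-patching refinements.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter
from skimage.color import rgb2hsv, hsv2rgb

def fuse_pet_mri(pet_rgb, mri_gray, wavelet="db2", level=4, sigma=1.0):
    """Condensed sketch of the PET/MRI fusion pipeline described above.

    Assumptions (not from the paper): HSV stands in for IHS, 'db2' is
    the mother wavelet, sigma=1.0 for the Gaussian pre-filter, inputs
    are co-registered, equally sized and scaled to [0, 1]. Fuzzy
    c-means refinement and color patching are not implemented here.
    """
    # 1. Preprocessing: Gaussian smoothing of the MR image and of the
    #    PET intensity component obtained from the HSV (IHS stand-in).
    mri = gaussian_filter(mri_gray.astype(np.float64), sigma=sigma)
    pet_hsv = rgb2hsv(pet_rgb)
    pet_i = gaussian_filter(pet_hsv[..., 2], sigma=sigma)

    # 2. 4-level DWT of the PET intensity and the MR image.
    c_pet = pywt.wavedec2(pet_i, wavelet, level=level)
    c_mri = pywt.wavedec2(mri, wavelet, level=level)

    # 3. Combine approximation and detail coefficients by averaging.
    fused = [0.5 * (c_pet[0] + c_mri[0])]
    for (hp, vp, dp), (hm, vm, dm) in zip(c_pet[1:], c_mri[1:]):
        fused.append((0.5 * (hp + hm), 0.5 * (vp + vm), 0.5 * (dp + dm)))

    # 4. Inverse DWT gives the fused intensity (crop any padding).
    rec = pywt.waverec2(fused, wavelet)
    fused_i = np.clip(rec[: pet_i.shape[0], : pet_i.shape[1]], 0.0, 1.0)

    # 5. Reinsert the fused intensity into the PET hue/saturation and
    #    convert back to RGB. (Fuzzy c-means clustering and color
    #    patching from the paper would further refine this result.)
    out_hsv = pet_hsv.copy()
    out_hsv[..., 2] = fused_i
    return hsv2rgb(out_hsv)
```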

      4. EXPERIMENTAL RESULTS

        The experimental dataset consists of PET and MR images for preprocessing and fusion, downloaded from http://www.med.harvard.edu/AANLIB/home.html. The images in Dataset-1, Dataset-2 and Dataset-3 are normal axial, normal coronal and Alzheimer's disease brain images, respectively. The PET and MR images for all three datasets, as well as their fusion results using the DWT-based image fusion, are shown in Fig. 2.

        Fig. 2 Three sets of PET and MRI images and their corresponding fused results obtained using the proposed method

      5. CONCLUSION

In this paper, we proposed a new method for fusing PET and MR brain images based on the wavelet transform, with less color distortion and without losing any anatomical information. Our method differs from the regular simple DWT fusion method in that it performs a four-level wavelet decomposition separately on the low- and high-activity regions of the PET and MR brain images. Experimental results demonstrated that our fused results for normal axial, normal coronal and Alzheimer's disease brain images have less color distortion and richer anatomical structural information than those obtained from other existing fusion techniques.

REFERENCES

[1] Tanish Zaveri and Mukesh Zaveri, "A Novel Region Based Multifocus Image Fusion Method," International Conference on Digital Image Processing, IEEE, 2009, DOI 10.1109/ICDIP.2009.27.

[2] Yijian Pei, Jiang Yu, Huayu Zhou and Guanghui Cai, "The Improved Wavelet Transform Based Image Fusion Algorithm and the Quality Assessment," 3rd International Congress on Image and Signal Processing (CISP 2010), 978-1-4244-6516-3/10, IEEE, 2010.

[3] Ujwala Patil and Uma Mudengudi, "Image fusion using hierarchical PCA," International Conference on Image Information Processing (ICIIP), pp. 1-6, IEEE, 2011.

[4] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, pp. 114-123, 2010.

[5] Phen-Lan Lin, Po-Whei Huang, Cheng-I Chen, Tsung-Ta Tsai and Chin-Han Chan, "Brain Medical Image Fusion Based on IHS and Log-Gabor with Suitable Decomposition Scale and Orientation for Different Regions," IADIS International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing, ISBN 978-972-8919-48-9, IADIS, 2011.

[6] Maruturi Haribabu, CH Hima Bindu and K. Satya Prasad, "Multimodal Image Fusion of MRI-PET Using Wavelet Transform," IEEE International Conference on Advances in Mobile Network, Communications and its Applications, 2012.
