Medical Image Fusion Technique Using Singlelevel and Multilevel DWT

DOI : 10.17577/IJERTV3IS10264


Jigar R. Patel, Jwolin M. Patel

EC Department, Parul Institute of Technology PIT-Limda, Vadodara, India

Abstract: In the field of medicine, different radiometric scanning techniques can be used to evaluate or examine the inner body parts. The most commonly used scanning techniques include the computerized tomography scan (CT scan) and the magnetic resonance imaging scan (MRI scan), but the images of body parts taken with these techniques have their own merits and demerits. MRI scans show soft tissue very clearly but cannot show bones and hard tissue clearly; CT scans, conversely, capture denser tissue well but miss soft-tissue information. Hence the idea of combining images from different modalities becomes very important, and medical image fusion has emerged as a promising new research field.

Image fusion involves merging two or more images in such a way as to retain the most desirable characteristics of each input image in the resultant output image. By observing the fused medical image, a doctor can easily confirm the location of the illness.

In this paper, single-level DWT and three-level DWT techniques have been studied, the simulation was carried out in MATLAB, and the results are compared using a number of image quality metrics.

Keywords: image fusion; discrete wavelet transform; computerized tomography scan; magnetic resonance imaging scan; image quality metrics


Some benefits[1] of image fusion include:

  • Image overlay for displays

  • Image sharpening for operator clarity

  • Image enhancement through noise reduction

  • Image mosaicking for enhanced spatial coverage

  • Image registration for reference to world coordinates

  • Enhanced clarity through feature amplification

  • Segmentation through regional selections

  • 3D estimation for scene calibration

  • Image identification for tracking

    1. IMAGE FUSION ALGORITHM

      1. INTRODUCTION

        Image fusion is the process of combining information from two or more images of a scene into a single composite image that is more informative and is more suitable for visual perception or computer processing. The objective in image fusion is to reduce uncertainty and minimize redundancy in the output while maximizing relevant information particular to an application or task. Given the same set of input images, different fused images may be created depending on the specific application and what is considered relevant information. There are several benefits in using image fusion: decreased uncertainty, wider spatial and temporal coverage, increased robustness, and improved reliability of system performance.

        Often a single sensor cannot produce a complete representation of a scene. Visible images provide spectral and spatial details, and if a target has the same color and spatial characteristics as its background, it cannot be distinguished from the background. If visible images are fused with thermal images, a target that is warmer or colder than its background can be easily identified, even when its color and spatial details are similar to those of its background. Fused images can provide information that sometimes cannot be observed in the individual input images. Successful image fusion significantly reduces the amount of data to be viewed or processed without significantly reducing the amount of relevant information.

        Steps:

        Figure 1. Basic image fusion algorithm

        1. Take input images of the same size and of the same scene or object, taken from different sensors (such as visible and infrared images) or with different focus.

        2. If the input images are colour, separate their RGB planes to perform 2-D transforms.

        3. Apply one of the image fusion techniques.

        4. Fuse the input image components using any of the pixel merging techniques.

        5. Convert the resulting fused transform components back to an image using the inverse transform.
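As an illustration of step 4, here is a minimal Python sketch of one simple pixel merging technique, per-pixel averaging, using NumPy (the paper's experiments are in MATLAB; this translation and the toy arrays are our own):

```python
import numpy as np

def fuse_average(img_a, img_b):
    """Fuse two same-size grayscale images by per-pixel averaging,
    one simple pixel merging technique from step 4."""
    if img_a.shape != img_b.shape:
        raise ValueError("input images must have the same size")
    return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0

# Toy stand-ins for registered CT and MRI slices (hypothetical data).
ct = np.array([[0, 100], [200, 50]], dtype=np.uint8)
mri = np.array([[100, 100], [0, 150]], dtype=np.uint8)
fused = fuse_average(ct, mri)
print(fused)  # [[ 50. 100.] [100. 100.]]
```

More sophisticated merging rules (maximum, weighted averaging, transform-domain rules) follow the same pattern: combine corresponding components, then invert any transform that was applied.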

      2. DISCRETE WAVELET TRANSFORM

        To understand the basic idea of the DWT, let us focus on a one-dimensional signal. The signal is passed through a low pass filter and a high pass filter so as to obtain both the low and high frequency parts of the signal. The high frequency part contains edge components, whereas the low frequency part contains the bulk of the information. The same process is repeated on the low frequency part to obtain the second-level low and high frequency components. This process continues until the signal has been entirely decomposed, or is stopped earlier by the application at hand. For compression and watermarking applications, generally no more than five decomposition steps are computed. From the DWT coefficients the original signal can be reconstructed; the reconstruction process (synthesis) is called the inverse DWT (IDWT).
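The one-level analysis/synthesis described above can be tried out with the PyWavelets library, used here as a Python stand-in for the MATLAB wavelet toolbox used in the paper (signal and wavelet choice are illustrative):

```python
import numpy as np
import pywt

# One level of the 1-D DWT: the low-pass branch yields the
# approximation (cA) and the high-pass branch the detail (cD),
# each downsampled by 2.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
cA, cD = pywt.dwt(x, 'haar')
print(len(cA), len(cD))   # 4 4  (half the input length each)

# The inverse DWT (synthesis) reconstructs the original signal.
x_rec = pywt.idwt(cA, cD, 'haar')
print(np.allclose(x, x_rec))  # True
```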

        Figure 2. Filtering or decomposition process at basic level [8]

        Any signal carries its most important and informative content in its low-frequency component, which is why the low frequency components matter most. The high-frequency content, on the other hand, imparts flavour or nuance. Consider the human voice: if the high frequency components are removed from a song, it would sound different, but one could still make out what is being said. However, if the low-frequency components are removed, one would hear only noise.

        In wavelet analysis two words occur frequently: approximations and details. The approximations are the high-scale, low-frequency components of the signal; the details are the low-scale, high-frequency components. In the first stage of the decomposition, the signal is applied to the low pass and high pass filters. If the original signal is of size 256×256, then each of the detail and approximation components would also be of size 256×256, so the output contains twice as many samples as the input. The output of each filter is therefore downsampled by 2, so that each output has half the size of the original signal and the total size equals that of the original. Figure 3 shows the concept. The decomposition or analysis process with downsampling produces the DWT coefficients.

        Figure 3. Analysis with down sampling

        The DWT and IDWT for a two-dimensional (2-D) image k(m,n) can be defined by applying the one-dimensional (1-D) DWT and IDWT along each dimension m and n separately, resulting in a pyramidal representation of the image. This kind of two-dimensional DWT decomposes the approximation coefficients at level j into four components: the approximation at level j+1 and the details in three orientations (horizontal, vertical, and diagonal).
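A small sketch of the 2-D case with PyWavelets, showing the four components produced by one decomposition level (the wavelet and the ramp image are illustrative choices):

```python
import numpy as np
import pywt

# One level of the 2-D DWT on an 8x8 image: approximation (LL)
# plus horizontal, vertical and diagonal details (LH, HL, HH).
img = np.arange(64, dtype=np.float64).reshape(8, 8)
LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')
print(LL.shape, LH.shape, HL.shape, HH.shape)  # all (4, 4)

# The 2-D inverse DWT reconstructs the image from the four subbands.
rec = pywt.idwt2((LL, (LH, HL, HH)), 'haar')
print(np.allclose(img, rec))  # True
```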

        Figure 4. Basic decomposition steps for images

        1. DWT ALGORITHM

          The basic flow of image fusion can be divided into two parts: decomposition and reconstruction.

          Figure 5. Image fusion algorithm using DWT

          Figure 6. Reconstruction Process using DWT
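The decompose-fuse-reconstruct flow of Figures 5 and 6 can be sketched as follows; the fusion rules here (averaging the approximations, keeping the larger-magnitude details) are common illustrative choices, not necessarily the exact rules used in the paper, and PyWavelets stands in for MATLAB:

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet='haar'):
    """Single-level DWT fusion sketch: decompose both images,
    average the approximation subbands, keep the larger-magnitude
    detail coefficients, then reconstruct with the inverse DWT."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)

    cA = (cA_a + cA_b) / 2.0                     # merge approximations
    pick = lambda d1, d2: np.where(np.abs(d1) >= np.abs(d2), d1, d2)
    details = tuple(pick(d1, d2)                 # keep strongest edges
                    for d1, d2 in zip((cH_a, cV_a, cD_a),
                                      (cH_b, cV_b, cD_b)))
    return pywt.idwt2((cA, details), wavelet)

# Toy stand-ins for registered inputs (hypothetical data).
a = np.zeros((8, 8)); a[:, :4] = 255.0   # bright left half
b = np.zeros((8, 8)); b[:4, :] = 255.0   # bright top half
fused = dwt_fuse(a, b)
print(fused.shape)  # (8, 8)
```

The three-level variant replaces `dwt2`/`idwt2` with `wavedec2`/`waverec2` and applies the same merging rules at every level.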

      3. MULTILEVEL DISCRETE WAVELET TRANSFORM [8]

        The successive approximation components can be iteratively decomposed, so that one signal is divided into many components of lower resolution. This is known as the wavelet decomposition tree, shown in Figure 7.

        Figure 7. Multilevel Decomposition

        For a two-dimensional image F(x,y), the forward and inverse decomposition can be done by applying the DWT and IDWT first along dimension x and then along the other dimension y. This results in a pyramidal representation of the image. This kind of 2-D DWT decomposes the image into four parts: the approximation component and the horizontal, vertical, and diagonal detail components.

        Figure 8. Decomposition Detail

        Figure 9. Basic Decomposition Steps for images

        Since an image is a 2-D signal, we mainly focus on the 2-D wavelet transform. The following figure shows the structure of a 2-D DWT with 3 decomposition levels:

        Figure 10. Pyramid Hierarchy of 2-D DWT

        After one level of decomposition, there will be four frequency bands, namely Low-Low (LL), Low-High (LH), High-Low (HL) and High-High (HH). The next level of decomposition is applied only to the LL band of the current stage, which forms a recursive decomposition procedure.
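The recursive decomposition of the LL band can be seen directly from the coefficient shapes returned by a three-level 2-D DWT (a PyWavelets sketch with an illustrative 64×64 image):

```python
import numpy as np
import pywt

# Three-level 2-D DWT: only the LL band is decomposed again at each
# level, giving one final approximation plus three detail triples.
img = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
coeffs = pywt.wavedec2(img, 'haar', level=3)

cA3 = coeffs[0]                  # approximation after 3 levels
print(cA3.shape)                 # (8, 8)
for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(level, cH.shape)       # detail shapes: (8,8), (16,16), (32,32)
```

The shapes halve at each level (64 to 32 to 16 to 8), matching the pyramid hierarchy of Figure 10.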

      4. IMAGE QUALITY METRICS [10]

        Image quality assessment plays a very important role in medical applications. Image quality metrics are used to benchmark different image processing algorithms by comparing objective measures.

        There are two types of metrics, subjective and objective, used to evaluate image quality. In a subjective metric, users rate the images based on the effect of degradation, and the ratings vary from user to user, whereas objective quality metrics quantify the difference in the image due to the processing technique and the level of processing (single- or multi-level). The same image dimensions are used throughout for convenience in the fusion process and the post-processing analysis. Before fusing, the images were registered. After registration, the fusion approaches (simple averaging, Principal Component Analysis, wavelet-based fusion at four different levels, and Radon-based fusion) are used to create the fused images.

        Assessment of image fusion performance can first be divided into two categories: with and without reference images. In reference-based assessment, a fused image is evaluated against a reference image which serves as ground truth. Image fusion assessment can further be classified as either qualitative or quantitative in nature. In practical applications, however, neither qualitative nor quantitative assessment alone satisfies the need perfectly. Given the complexity of specific applications, an assessment paradigm combining both qualitative and quantitative assessment is most appropriate in order to achieve the best result.
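A few objective metrics of the kind listed in Table 1 can be sketched in NumPy; the definitions below (MSE, PSNR, Pearson correlation) are standard textbook forms and may differ in detail from the exact formulas used in the paper:

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a test image."""
    ref = ref.astype(np.float64); img = img.astype(np.float64)
    return np.mean((ref - img) ** 2)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    m = mse(ref, img)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def correlation(ref, img):
    """Pearson correlation coefficient between the two images."""
    return np.corrcoef(ref.ravel(), img.ravel())[0, 1]

# Toy reference and "fused" images (hypothetical data).
ref = np.array([[0, 255], [255, 0]], dtype=np.float64)
out = np.array([[10, 245], [245, 10]], dtype=np.float64)
print(round(mse(ref, out), 1))          # 100.0
print(round(psnr(ref, out), 2))         # 28.13
print(round(correlation(ref, out), 4))  # 1.0
```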

      5. IMAGE FUSION RESULT

        Figure 11. MRI Image of Brain [5]

        Figure 12. CT Image of Brain [5]

        Figure 11 and Figure 12 show the MRI and CT images of the brain of the same person, respectively. In the MRI image the inner contour is missing, but it provides better information on soft tissue. The CT image provides the best information on denser tissue with less distortion, but it misses the soft tissue information.

        Figure 13. Single level Fused Image of Brain

        The image in Figure 13 is the result of the single-level DWT technique, obtained by combining the MRI and CT images. The single-level DWT fused image carries information from both inputs, but with less approximation detail.

        Figure 14. Three level DWT Fused Image of Brain

        The image in Figure 14 is the result of the three-level DWT fusion technique. Compared with single-level DWT, it shows soft-tissue information that is not visible in Figure 13, i.e. at the left and right sides of the inner part. However, with three-level DWT, ringing and aliasing effects occur at the outer edges.

        Table 1: Performance parameter values for single-level and three-level DWT [4]

        Parameter               Single-level DWT    Three-level DWT
        Correlation             0.6651              0.9941
        Visibility Difference   17.9542             17.6714
        PSNR                    16.4997             15.6357
        SNR                     1.3106              0.4465
        MSE                     31.7188             32.5829
        NMSE                    -1.3106             -0.4465
        MAE                     13.8329             14.0146
        NAE                     1.5630              1.7447
        SC                      -1.6404             -2.6043
        MD                      61.9977             62.1794
        IF                      2.3106              1.4465
        NCC                     -0.6564             -0.1791
        CQ                      20.1031             20.5804

        The table above lists the Correlation, Visibility Difference, PSNR, SNR, MSE, NMSE, MAE, NAE, SC, MD, IF, NCC and CQ values. The correlation in three-level DWT is close to the ideal value, so the pixel-information similarity between the reference and fused images is very high. The SNR, however, is lower for three-level DWT than for single-level DWT. Even so, given the correlation gain, the three-level DWT is the better method.

      6. CONCLUSION

        After fusing the CT and MRI images and simulating in MATLAB, the results show that with multilevel DWT the correlation value is 33% higher than with single-level DWT, at the cost of a lower PSNR and a higher MSE and other error parameters.

      7. FUTURE WORK

        In future work, two methods, the Discrete Wavelet Transform and the Stationary Wavelet Transform, will be compared, and the better of the two will be combined with the Principal Component Analysis method in such a manner that the visual quality remains good after the images are fused. The resultant fused image should be free of spatial and spectral degradation as well as ringing/aliasing effects, so that a doctor can easily confirm the location of the illness.

REFERENCES

  1. A. Soma Sekhar and M. N. Giri Prasad, "A Novel Approach of Image Fusion on MR and CT Images Using Wavelet Transform," IEEE, 2011.

  2. Wu Yiqi, You Jing and Wu Guoping, "Multilevel and Multifocus Image Fusion Based on Multi-wavelet Transformation," IEEE, 2009.

  3. F. Sadjadi, "Comparative Image Fusion Analysis," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 3, June 2005.

  4. Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.

  5. Smita Anil Kulkarni and A. D. Kumbhar, "Image Fusion on MR and CT Images," IJSER, 2013.

  6. Darshana Mistry and Asim Banerjee, "Discrete Wavelet Transform Using MATLAB," IJCET, 2013.

  7. S. S. Bedi and Rati Khandelwal, "Comprehensive and Comparative Study of Image Fusion," IJSCE, 2013.

  8. Shivsubramani Krishnamoorthy and K. P. Soman, "Implementation and Comparative Study of Image Fusion Algorithms," International Journal of Computer Applications, 2010.

  9. Richa S., Mayank V. and Afzel N., "Multimodal Medical Image Fusion using Redundant Discrete Wavelet Transform," International Conference on Advances in Pattern Recognition, Feb. 2009.

  10. Yijian Pei, Jiang Yu, Huayu Zhou and Guanghui Cai, "The Improved Wavelet Transform Based Image Fusion Algorithm and the Quality Assessment," IEEE, 2010.
