Phase Congruency based Multimodal Image Fusion using Adaptive Histogram in NSCT Domain




Jisi K. J.,

Student (M.Tech.)

Department of Electronics and Communication, T. John Institute of Technology, Bangalore, Karnataka, India

Mrs. Adepu Parimala,

Assistant Professor,

Department of Electronics and Communication, T. John Institute of Technology, Bangalore, Karnataka, India

Deepak Vijay, Project Manager, Arvin Technologies,

Kalamassery, Kerala, India

Abstract: Multimodal image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy and treatment planning. It has been developed for various imaging modalities in medical imaging, and its main aim is to capture the most relevant information from the source images in a single output. A fusion framework is therefore proposed for multimodal images based on the nonsubsampled contourlet transform (NSCT), which builds on the theory of the contourlet transform. It retains the advantages of the contourlet transform and effectively overcomes limitations of the wavelet transform such as the pseudo-Gibbs phenomenon. In this technique the source images are first decomposed by the NSCT, and the low- and high-frequency components are fused. Two fusion rules are used: phase congruency and directive contrast. Phase congruency is applied to the low-frequency components to improve their contrast and brightness, while directive contrast is applied to the high-frequency components, where it effectively selects the frequency coefficients from the clearer parts of the high-frequency subbands. Finally, the fused image is constructed by the inverse NSCT from all the selected coefficients. Image enhancement is performed using adaptive histogram equalization and image denoising using Wiener filtering. The combination of these techniques preserves more detail in the fused image and improves its quality. Multimodal image fusion integrates complementary information from multiple images into a single output; it not only provides an accurate and complete description of the same target, but also reduces randomness and redundancy, increasing clinical applicability.

Keywords: Nonsubsampled contourlet transform, directive contrast, phase congruency, adaptive histogram equalization, Wiener filtering.

  1. INTRODUCTION

    In recent years, medical imaging has received increasing attention as a vital component of medical diagnostics and treatment. However, each imaging modality reports on a restricted domain: different techniques such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI) and magnetic resonance angiography (MRA) provide limited information, some of it common across modalities and some unique. For example, X-ray and CT can image dense structures such as bones and implants with little distortion, but they cannot detect physiological changes. Normal and pathological soft tissue is better visualized by MRI, whereas PET provides better information on blood flow and metabolic activity, though at low spatial resolution. As a result, anatomical and functional medical images need to be combined for a clear view. Image fusion techniques are categorized into three levels, namely pixel-level, feature-level and decision-level fusion; multimodal medical image fusion usually employs pixel-level fusion because it retains the original measured quantities. Here, image fusion is performed in the NSCT domain using two fusion rules: phase congruency and directive contrast. Multimodal medical image fusion not only helps in diagnosing diseases, but also reduces storage cost by replacing multiple source images with a single fused image. It has been identified as a promising way to integrate information from multiple modality images into a more complete and accurate description of the same object.

    The existing multiscale method, the wavelet transform, handles isolated discontinuities well but not edges and textured regions, and it captures only limited directional information along the vertical, horizontal and diagonal directions. These issues are addressed by a more recent multiscale decomposition, the contourlet transform, and its nonsubsampled version. The proposed technique gives a clear view even at the edges and textured regions of the images by using the directive contrast fusion rule. The wavelet transform is also not purely 2-D, whereas the contourlet transform is a true 2-D sparse representation for 2-D signals such as images, with the sparse expansion expressed by contour segments. As a result, it captures 2-D geometrical structures in visual information much more effectively than traditional multiscale methods. Wiener filtering is used to reduce the effect of noise, and adaptive histogram equalization is applied to obtain a clear output image.

  2. MULTIMODAL IMAGE FUSION

    1. Phase Congruency

      Phase congruency is the fusion rule used here to fuse the low-frequency coefficients of the input images; it produces a contrast- and brightness-invariant representation of the low-frequency coefficients [1]. It provides luminance- and contrast-invariant feature extraction in the low-frequency coefficients of the image, and is mainly used for feature perception based on a local energy model. The procedure is as follows: first, the features of the low-frequency sub-images are extracted by the phase congruency extractor; then the low-frequency coefficients are fused according to the local energy model.
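A full phase congruency implementation involves a bank of log-Gabor filters and is beyond a short example. The sketch below (in Python with NumPy/SciPy, although the project itself is written in MATLAB) illustrates only the selection step: two hypothetical low-frequency subbands are fused by choosing, at each pixel, the coefficient with the larger local energy, a simplified stand-in for the local-energy-model rule described above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowpass_by_local_energy(L1, L2, win=7):
    """Pick, at each pixel, the low-frequency coefficient with the larger
    local energy (mean of squared coefficients over a win x win window).
    A simplified stand-in for the phase-congruency-driven selection."""
    e1 = uniform_filter(L1 * L1, size=win)  # local energy of subband 1
    e2 = uniform_filter(L2 * L2, size=win)  # local energy of subband 2
    return np.where(e1 >= e2, L1, L2)

# Toy subbands: each one carries strong structure on a different side.
a = np.zeros((8, 8)); a[:, :4] = 5.0
b = np.zeros((8, 8)); b[:, 4:] = 5.0
fused = fuse_lowpass_by_local_energy(a, b)
```

In the toy case the fused subband keeps the strong left half of the first input and the strong right half of the second, which is exactly the behaviour the selection rule is meant to produce.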

    2. Directive Contrast

      The high-frequency coefficients of an image usually contain its detailed components. It should be noted that noise is also present in the high-frequency coefficients, which may eventually increase the distortion of the image. To avoid this kind of problem, a directive contrast based fusion rule is proposed for fusing the high-frequency coefficients of the input sub-images. The directive contrast in the NSCT domain is computed at every point and orientation in the input sub-images to select the high-frequency coefficients [2]; the fusion of the high-frequency coefficients is then performed.
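A minimal sketch of a directive-contrast-style selection, assuming the contrast at a pixel is taken as the ratio of the high-frequency magnitude to the co-located low-frequency (luminance) value; the toy arrays and the exact ratio form are illustrative, not the precise definition in [2].

```python
import numpy as np

def fuse_highpass_by_directive_contrast(H1, H2, L1, L2, eps=1e-8):
    """Keep, at each pixel, the high-frequency coefficient with the larger
    directive contrast, taken here as |high-frequency| / low-frequency."""
    c1 = np.abs(H1) / (np.abs(L1) + eps)
    c2 = np.abs(H2) / (np.abs(L2) + eps)
    return np.where(c1 >= c2, H1, H2)

H1 = np.array([[3.0, 0.1], [0.2, 4.0]])
H2 = np.array([[0.5, 2.0], [3.0, 0.3]])
L = np.full((2, 2), 10.0)  # shared luminance for the toy case
fused = fuse_highpass_by_directive_contrast(H1, H2, L, L)
```

With a shared luminance band the rule reduces to picking the larger-magnitude detail coefficient at each pixel, so the fused subband is [[3.0, 2.0], [3.0, 4.0]].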

    3. Wiener Filtering

      Wiener filtering is a general way of finding the best reconstruction of a noisy signal; it gives the best reconstruction of the original signal in the L2-norm sense [3]. The goal of the Wiener filter is to filter out the noise that has corrupted the signal, and it is based on a statistical approach. Typical filters are designed for a desired frequency response; the design of the Wiener filter takes a different approach. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant filter whose output comes as close to the original signal as possible.
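The behaviour described above can be demonstrated with SciPy's built-in `scipy.signal.wiener`, which estimates the local mean and variance in a sliding window and attenuates the signal toward the local mean where the local variance is small; the flat test image and noise level below are made up for illustration.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)            # flat "signal"
noisy = clean + rng.normal(0.0, 10.0, clean.shape)

# Adaptive Wiener filter over a 5x5 window: in this flat region the
# local variance is almost entirely noise, so the filter suppresses it.
denoised = wiener(noisy, mysize=5)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

For this flat image the mean squared error against the clean signal drops substantially after filtering, which is the L2-optimality property the section refers to.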

    4. Adaptive Histogram Equalization

    Adaptive histogram equalization (AHE) is an image processing technique used to improve contrast in images. It differs from ordinary histogram equalization in that the adaptive method computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the lightness values [4]. It is therefore suitable for improving the local contrast of an image and bringing out more detail. AHE transforms each pixel with a transformation function derived from a neighborhood region. When the region containing a pixel's neighborhood is fairly homogeneous, its histogram is strongly peaked and the transformation function maps a narrow range of pixel values to the whole range of the result image. This causes AHE to over-amplify small amounts of noise in largely homogeneous regions of the image.
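A crude sketch of the per-section idea, assuming 8-bit input whose dimensions divide evenly into the tile grid: each tile is equalized with its own histogram. (Production AHE also interpolates between neighbouring tile mappings to avoid block artifacts, and the contrast-limited variant clips the histogram to curb the noise amplification noted above.)

```python
import numpy as np

def tile_histogram_equalize(img, tiles=4):
    """Split an 8-bit image into tiles x tiles blocks and equalize each
    block with its own histogram -- a blocky approximation of AHE."""
    out = np.empty_like(img)
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            block = img[i*th:(i+1)*th, j*tw:(j+1)*tw]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            cdf = hist.cumsum()
            # Stretch this tile's cumulative histogram to [0, 255].
            cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1)
            out[i*th:(i+1)*th, j*tw:(j+1)*tw] = cdf[block]
    return out

# Low-contrast ramp using only grey levels 0..63; each tile sees a
# narrow value range, so every tile gets stretched to the full range.
img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
eq = tile_histogram_equalize(img, tiles=4)
```

Because each tile's mapping is built from a narrow slice of grey levels, the output uses the full 0-255 range locally, illustrating both the local contrast boost and why nearly-flat tiles would have their noise stretched the same way.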

    5. Design Implementation

    Multimodal image fusion combines many source images into a single output, and the fusion of the modality images is performed using the techniques described above. This project is programmed in MATLAB and implemented on a Raspberry Pi. The Raspberry Pi is a credit-card-sized computer that plugs into a TV and a keyboard; the original Raspberry Pi is based on the Broadcom BCM2835 system on a chip, which includes an ARM1176JZF-S 700 MHz processor.
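The overall pipeline can be sketched end to end. Since no standard Python NSCT implementation is assumed here, a one-level Gaussian lowpass/highpass split stands in for the NSCT decomposition, with energy-based and magnitude-based selection standing in for the two fusion rules; the project itself uses MATLAB and the full NSCT.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_pair(img1, img2, sigma=2.0):
    """End-to-end fusion sketch: decompose each image into a lowpass
    band and a highpass residual, fuse the low bands by larger energy
    and the high bands by larger magnitude, then recombine (the exact
    inverse of this trivial two-band split is simple addition)."""
    l1, l2 = gaussian_filter(img1, sigma), gaussian_filter(img2, sigma)
    h1, h2 = img1 - l1, img2 - l2                       # highpass residuals
    low = np.where(l1 * l1 >= l2 * l2, l1, l2)          # energy-based rule
    high = np.where(np.abs(h1) >= np.abs(h2), h1, h2)   # magnitude-based rule
    return low + high

rng = np.random.default_rng(1)
a = rng.random((32, 32))
```

A useful sanity check on any such pipeline is idempotence: fusing an image with itself must return the image unchanged, since both fusion rules then pick the same coefficients and the recombination inverts the split exactly.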

  3. APPLICATIONS

    The main aim of this project is to capture the most relevant information from the source images in a single output, which plays an important role in medical diagnosis. Other important applications of image fusion include medical imaging, microscopic imaging, remote sensing, computer vision, and robotics.

  4. FUTURE WORK

    In this project, multimodal image fusion is performed: more relevant information is obtained in a single image by combining two or more images using the fusion technique. In future, this technique, with modifications, could be used to estimate the age of fossils from fossil images rather than from the original fossils.

  5. REFERENCES

  1. Q. Zhang and B. L. Guo, "Multifocus image fusion using the nonsubsampled contourlet transform and phase congruency," Signal Process., vol. 89, no. 7, pp. 1334–1346, 2009.

  2. G. Bhatnagar and B. Raman, "A new image fusion technique based on directive contrast," Electron. Lett. Comput. Vision Image Anal., vol. 8, no. 2, pp. 18–38, 2009.

  3. W. H. Press, course notes for CS 395T, The University of Texas at Austin, Spring 2008.

  4. M. Eramian and D. Mould, "Histogram equalization using neighborhood metrics," in Proc. 2nd Canadian Conference on Computer and Robot Vision, 2005, pp. 397–404.
