Analysis of Ringing Artifact in Image Fusion Using Directional Wavelet Transforms

DOI: 10.17577/IJERTCONV9IS03102


Ashish V. Vanmali

Tushar Kataria, Samrudhha G. Kelkar, Vikram M. Gadre

Dept. of Information Technology, Vidyavardhini's C.O.E. & Tech., Vasai, Mumbai, India 401202

Dept. of Electrical Engineering, Indian Institute of Technology Bombay, Powai, Mumbai, India 400076

Abstract—In the field of multi-data analysis and fusion, image fusion plays a vital role in many applications. With the invention of new sensors, the demand for high-quality image fusion algorithms has grown tremendously. Wavelet based fusion is a popular choice for many image fusion algorithms because of its ability to decouple different features of information. However, it suffers from ringing artifacts generated in the output. This paper presents an analysis of ringing artifacts in image fusion using directional wavelets (curvelets, contourlets, non-subsampled contourlets etc.). We compare the performance of various fusion rules for directional wavelets available in the literature. The experimental results suggest that ringing artifacts are present for all types of wavelets, with the extent of the artifact varying with the type of wavelet, the fusion rule used and the number of levels of decomposition.

Index Terms—Directional Wavelets, Image Fusion, Modified Structural Dissimilarity, Ringing Artifacts

  1. INTRODUCTION

    Fusion of complementary information from different source images is known as image fusion. In this digital age, there is a huge influx of data captured from multiple camera settings and/or sensors imaging the same object or scene. Each image captured thus exhibits different features of the data, with varying amounts of detail of the objects. Combining these shreds of information from different images becomes imperative, as it helps in defining the big picture. For example, in medical applications, fusing Computerized Tomography (CT), Magnetic Resonance Imaging (MRI), Functional Magnetic Resonance Imaging (fMRI), Positron Emission Tomography (PET) etc. helps in diagnosing a disease in a reliable, efficient and quick manner. In surveillance, use of visible and infrared (IR) images is a common practice. High dynamic range (HDR) imaging involves fusion of differently exposed low dynamic range (LDR) images.

    The objective of image fusion is to find one image which has more information about the scene than any of the source images. The input data for image fusion algorithms is generally of two types:

    • Images taken from a single sensor but with different parameters of the imaging apparatus. Examples include multi-focus images, multi-exposure images, multi-temporal images etc.

    • Images taken from multiple sensors. Examples include near infrared (NIR) images, IR images, CT, MRI, PET, fMRI etc.

      We can broadly classify the image fusion techniques into four categories:

      1. Component substitution based fusion algorithms [1]–[5]

      2. Optimization based fusion algorithms [6]–[10]

      3. Multi-resolution (wavelets and others) based fusion algorithms [11]–[15] and

      4. Neural network based fusion algorithms [16]–[19].

      Wavelet based multi-resolution analysis decouples data into low frequency (LF) and high frequency (HF) components at various scales. This allows for separate processing of the LF and HF components, and gives more flexibility and freedom in designing better fusion algorithms. Also, the computational complexity of wavelet analysis-synthesis filter banks is very low. These advantages make wavelets popular for image fusion applications. Wavelet based image fusion algorithms follow three simple steps:

      1. Decompose source images into LF and HF coefficients to form wavelet pyramids.

      2. Fuse LF and HF coefficients using the prescribed fusion rule to form a fused wavelet pyramid.

      3. Take inverse transform of the fused coefficients to get the fused image.

        One of the simplest fusion rules in wavelet based fusion is mean-max fusion. In mean-max fusion, the detail coefficient with the highest magnitude among the two images is chosen as the detail wavelet coefficient of the fused image. This ensures maximum detail preservation in the fused image. The approximate wavelet coefficients are generated by averaging the individual approximate wavelet coefficients. In more sophisticated algorithms, the LF and HF coefficients are weighted based on certain features like local energy, local entropy, matching degree, and so on. A study of such fusion rules is presented by B. Zhang in [20].
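The three steps above, with the mean-max rule just described, can be sketched with PyWavelets; the `db4` wavelet and 3 levels of decomposition are illustrative choices, not settings from this paper.

```python
# A minimal sketch of mean-max wavelet fusion using PyWavelets.
import numpy as np
import pywt

def mean_max_fusion(img1, img2, wavelet="db4", level=3):
    """Fuse two grayscale images: average the approximation (LF)
    coefficients, keep the larger-magnitude detail (HF) coefficients."""
    c1 = pywt.wavedec2(img1, wavelet, level=level)
    c2 = pywt.wavedec2(img2, wavelet, level=level)

    # Approximation band: element-wise mean.
    fused = [(c1[0] + c2[0]) / 2.0]

    # Detail bands (horizontal, vertical, diagonal) at each scale:
    # pick the coefficient with the larger magnitude.
    for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in ((h1, h2), (v1, v2), (d1, d2))
        ))

    # Inverse transform of the fused pyramid gives the fused image.
    return pywt.waverec2(fused, wavelet)
```

Fusing an image with itself should return the image unchanged (up to floating point error), which is a quick way to check the round trip.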

        Along with the separable wavelet transform, use of non-separable wavelet transforms and other variants of the wavelet transform is also common practice in many image fusion applications. Singh and Khare [13] used the Daubechies complex wavelet transform for multi-modal medical image fusion. At the same time, the non-subsampled contourlet transform (NSCT) is used by Bhatnagar et al. [12] for the fusion of multi-modal images. Wang et al. [14] used the shearlet transform for decomposition of medical images. Upla et al. [15] used the contourlet transform for fusion of panchromatic (PAN) and multi-spectral (MS) images in remote sensing applications. Malik et al. [21] proposed a weight-map based wavelet multi-resolution fusion for multi-exposure image fusion. A general introduction to multi-resolution image fusion is provided by Piella in [11].

      Fig. 1. Example of ringing artifacts. (a) Multi-focus image 1. (b) Multi-focus image 2. (c) Fused image. (d) Zoomed part of (c).

      However, the main drawback of wavelet based techniques is that they suffer from ringing artifacts in the fused image [22], [23]. The analysis for separable wavelets is presented in our previous work, Vanmali et al. [24] and Kelkar [25]. Two possible methods to compensate for ringing artifacts in separable wavelets are also presented in Vanmali et al. [24]. In this paper, we focus on the analysis of ringing artifacts for directional wavelets like curvelets, contourlets, non-subsampled contourlets and shearlets. Analysis of ringing artifacts at different levels of decomposition, using different fusion algorithms and for a variety of images, is presented in this work.

  2. RINGING ARTIFACTS IN WAVELET BASED FUSION

    In digital image processing and signal processing, ringing artifacts appear close to strong edges (high gradient values) or sharp transitions of a signal. Because of the oscillatory and fading nature of these artifacts, they are called ringing. In images, dark oscillations are observed on a white background and white oscillations on a dark background. An example of ringing artifacts in images is shown in Figure 1, where the fusion of two multi-focus images is performed using the traditional mean-max fusion algorithm. It can be observed that the ringing artifacts are more prominent across the strong edges and not so visible around the weak edges. Also, the ringing artifacts are not perceivable around textures (the hair in the image), as textures themselves are oscillatory in nature. Even though ringing artifacts are present in such areas, they are not perceivable to the naked eye because their perturbations are smaller in magnitude than the background texture. Ringing artifacts intrinsically occur because of the loss of HF information of a signal. In wavelet based image fusion, it is because the original HF coefficients of an image are lost and substituted with other coefficients in their place. A preliminary analysis of ringing artifacts in wavelet based fusion is given by Dippel et al. in [22]. According to Dippel et al., in a wavelet pyramid there is a strong parent-child relationship among the coefficients, termed inter-scale correlation. In the fusion process, this relationship is altered, giving rise to ringing artifacts. Also, the reconstruction process involves a frequency-sensitive high pass filtering operation, which further amplifies these ringing artifacts. These ringing artifacts are more dominant for strong edges than weak edges.

    In our previous work, Vanmali et al. [24] and Kelkar [25] investigated this problem with thorough experimentation for separable wavelets and drew the following observations:

    • The ringing artifact increases with the number of levels of decomposition, and then remains constant after a particular level of decomposition.

    • Ringing artifacts are more abrupt for smaller lengths of the filters.

    • Ringing artifacts are smoother for higher lengths of the filters.

    We now extend this work for the directional wavelets.

  3. EXPERIMENTAL SETUP

    For the analysis of artifacts in the case of directional wavelets, we use an experimental setup similar to that used in Vanmali et al. [24] and Kelkar [25]. We start with a standard test image and form two multi-focus images, the first with blur increasing from bottom to top and the second with blur increasing from top to bottom. An example of the input images so generated is shown in Figure 2. These multi-focus images are then fused using different fusion algorithms with varying levels of decomposition, and the corresponding outputs are observed. Since we are forming the multi-focus images from the standard test image, the ground truth is available and can be used to compare the quality of fusion. The experiments were carried out for the standard test images Phantom, Peppers, Girlface, Lena and Baboon, all of size 512 × 512 pixels. The Phantom image has constant gray level areas without any shading or texture. The Peppers image has variation in shading with a very low amount of texture. The images have increasing amounts of texture from Peppers to Baboon.
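A minimal sketch of this input generation, assuming a simple per-row blend of pre-blurred copies of the test image (the paper does not specify how the spatially varying blur was produced, so the blending scheme, `max_sigma` and `steps` are our assumptions):

```python
# Generate a multi-focus pair with opposite vertical blur gradients.
import numpy as np
from scipy.ndimage import gaussian_filter

def multifocus_pair(img, max_sigma=4.0, steps=8):
    h, w = img.shape
    # Level 0 is the sharp image; higher levels are increasingly blurred.
    sigmas = np.linspace(max_sigma / (steps - 1), max_sigma, steps - 1)
    stack = np.stack([img] + [gaussian_filter(img, s) for s in sigmas])
    # Blur level per row: most blurred at the top, sharp at the bottom.
    level = np.linspace(steps - 1, 0, h).round().astype(int)
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    blur_top = stack[level[:, None], rows, cols]            # blur grows bottom-to-top
    blur_bottom = stack[level[::-1][:, None], rows, cols]   # blur grows top-to-bottom
    return blur_top, blur_bottom
```

The bottom row of the first output (and the top row of the second) comes from level 0 and is therefore identical to the original image, which makes the construction easy to verify.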

    1. Ringing Measurement Metric: Modified Structural Dissimilarity (MSD)

      Structural Similarity Index (SSIM) [26] is one of the most popular full-reference image quality assessment tools. SSIM is believed to be closer to the human visual system than traditional methods like mean square error (MSE) or peak signal to noise ratio (PSNR). It measures the distortion of structures in the fused image with reference to the original image. SSIM values range from 0 to 1, where a value of 1 is returned if the two images are identical. When used directly to measure ringing, it was observed that SSIM values were not consistent with the visual observations [25]. Therefore, SSIM was modified to measure the ringing artifacts.

      Fig. 2. Input images for the experiments generated from a standard test image. (a) Original image. (b) Multi-focus image 1. (c) Multi-focus image 2.

      Fig. 3. Modified SSIM metric for measurement of ringing artifacts. (a) Lena image. (b) Edges by Canny edge detector. (c) Mask after dilation of edges. (d) Image after mask multiplication.

    The ringing artifacts are more prevalent near strong edges and have much smaller magnitudes near weak edges. In highly textured areas, the ringing artifacts get absorbed in the texture and hence are not perceived visually. This phenomenon is called texture masking [27]. Also, we are interested in the changes that have taken place in the fused image compared to the original image. Therefore, we modify the SSIM metric as explained below and call it the Modified Structural Dissimilarity (MSD), as used in [25].

    • Detect strong edges using Canny edge detector.

    • Dilate the detected edges on both sides to get a mask so that only areas surrounding strong edges are taken.

    • Multiply the mask with the original and the fused image.

    • Calculate SSIM of the fused image w.r.t. the original reference image in the masked regions.

    • Calculate MSD as

    MSD = 1 − SSIM. (1)

    The above steps are depicted in Figure 3.

    The mean value of the MSD is taken as the amount of ringing artifact present in the fused image. It was observed that this modified metric gives values consistent with the visual perception of changes in ringing artifacts.
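The steps above can be sketched with scikit-image and SciPy; the Canny defaults and the dilation radius are illustrative assumptions, not the exact settings of [25].

```python
# A sketch of the MSD measure: 1 - SSIM restricted to the
# neighborhood of strong edges.
import numpy as np
from scipy import ndimage
from skimage.feature import canny
from skimage.metrics import structural_similarity

def modified_structural_dissimilarity(reference, fused, dilate_radius=5):
    # 1. Detect strong edges of the reference image.
    edges = canny(reference)
    # 2. Dilate the edges to cover a band on both sides.
    mask = ndimage.binary_dilation(edges, iterations=dilate_radius)
    # 3. Multiply the mask with the original and the fused image.
    ref_m = reference * mask
    fus_m = fused * mask
    # 4. SSIM of the masked fused image w.r.t. the masked reference.
    ssim = structural_similarity(
        ref_m, fus_m, data_range=ref_m.max() - ref_m.min())
    # 5. MSD = 1 - SSIM.
    return 1.0 - ssim
```

For identical images the score is 0, and any perturbation near a strong edge pushes it above 0.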

  4. DIRECTIONAL WAVELETS AND FUSION RULES UNDER CONSIDERATION

    The curvelet, contourlet, non-subsampled contourlet and shearlet transforms are used for the analysis of ringing artifacts. All of these transforms are termed directional wavelet transforms because their basis functions are orientation dependent. In this section, we give a brief overview of each transform used in the analysis and list the different fusion rules used for the analysis of ringing artifacts.

    1. Curvelet Transform

      The wavelet transform is good at representing only point singularities, but many natural images have curve singularities, which are not represented well by wavelets. With the objective of overcoming this drawback, the curvelet transform was proposed by Candes et al. [28]. In images, curvelets allow an almost optimal sparse representation of objects with curve singularities. For a smooth object f with discontinuities along C²-continuous curves, the best N-term approximation f_N obeys ||f − f_N||₂² ≤ C N⁻² (log N)³, while for wavelets the error decays only as N⁻¹. As curvelets are defined in the continuous domain, extending the algorithm to discrete data, i.e. images, is quite challenging. We do not have an exact representation of images in the curvelet domain; rather, it is the best approximation in the digital domain, which is highly redundant.

      Curvelets were analyzed only for the mean-max fusion rule. This fusion rule is implemented for all directional wavelets so that the performance of each one can be compared.
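As a quick numeric illustration of the two decay rates quoted above (constants omitted, so only the large-N trend is meaningful):

```python
# Compare the curvelet bound N^-2 (log N)^3 with the wavelet
# rate N^-1 at a few values of N.
import math

for n in (10, 100, 1000, 10000):
    curvelet_bound = n ** -2 * math.log(n) ** 3
    wavelet_bound = n ** -1
    print(f"N={n:>5}  curvelet ~ {curvelet_bound:.3e}  wavelet ~ {wavelet_bound:.3e}")
```

For small N the two are comparable, but from a few hundred terms onward the curvelet bound falls well below the wavelet rate.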

    2. Contourlet Transform (CT)

      The contourlet transform [29] is different from curvelets. Curvelets are defined in the continuous domain and then discretized for sampled data, whereas the contourlet transform is constructed in the discrete domain and its convergence properties are then studied in the continuous domain. Contourlets are constructed using non-separable filter banks. The performance of curvelets in representing directional geometry/features is better than that of contourlets. The drawback of the contourlet approach is that various artifacts occur when it is used in applications like denoising or compression, and the associated continuous domain theory is missing.

      For ringing analysis in the contourlet domain, two algorithms are implemented. The first is the mean-max rule and the second is the rule proposed by Yang et al. in [30].

    3. Non-Subsampled Contourlet Transform (NSCT)

      NSCT, proposed by Zhou et al. in [31], is an overcomplete, shift-invariant and multi-directional image decomposition transform. NSCT is highly redundant, as it does not contain the up- and down-samplers present in CT. Due to the removal of the up- and down-samplers, the design problem is less constrained than for contourlets. NSCT performs better at image denoising and image enhancement than curvelets and contourlets.

      Five different algorithms were implemented for the analysis of ringing artifacts in the NSCT domain:

      1. Mean-max fusion rule

      2. CT and MR image fusion scheme in NSCT domain proposed by Ganasala and Kumar [32]

      3. Directive Contrast based Multimodal Medical Image Fu- sion in NSCT Domain proposed by Bhatnagar et al. [12]

      4. Multi-focus image fusion based on non-subsampled contourlet transform and focused regions detection proposed by Li et al. [33]

      5. Multifocus image fusion using the non-subsampled con- tourlet transform proposed by Zhang et al. [34]

    4. Shearlet Transform (ST)

      The shearlet transform [35] is the only transform which has a unified theory in both the continuous and digital domains, and can give an optimal sparse approximation of piecewise smooth images with singularities along smooth curves. Shearlets form an affine system which parameterizes directions by slope, as compared to angles in contourlets and curvelets. This allows a simplified treatment in the digital domain and also an extensive theoretical framework. Also, the N-term approximation of shearlet coefficients is the same as that of curvelets (||f − f_N||₂² ≤ C N⁻² (log N)³).

      For the analysis of ringing artifacts, five fusion algorithms were implemented and their results compared.

      1. Mean-max fusion rule

      2. Multi-modality medical image fusion based on new fea- tures in NSST domain by Ganasala and Kumar [36]

      3. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain by Ganasala and Kumar [37]

      4. Technique for image fusion based on NSST domain improved fast non-classical RF proposed by Kong et al. [38].

      5. A novel image fusion algorithm based on non-subsampled shearlet transform proposed by Yin et al. [39].

  5. RESULTS AND DISCUSSION

    For each fusion algorithm, we used the experimental setup discussed in Section III. The levels of decomposition are varied from 1 to 5, as concluded in [24], and the fused outputs are observed. For quantitative analysis, the mean MSD values are recorded and plotted against the levels of decomposition. For brevity, the results for the Girlface image, with the two inputs as shown in Figure 1 and 4 levels of decomposition, are shown for each fusion rule in Figure 4.
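The experimental loop can be sketched as follows, with separable-wavelet mean-max fusion and plain 1 − SSIM standing in for the directional transforms and the full MSD metric; the image, wavelet name and parameters are all illustrative assumptions, not the paper's settings.

```python
# Sweep decomposition levels 1-5 for a synthetic multi-focus pair
# and record a dissimilarity score against the ground truth.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
truth = gaussian_filter(rng.random((256, 256)), 2.0)   # stand-in ground truth
blurred = gaussian_filter(truth, 3.0)
img1 = truth.copy()
img1[128:] = blurred[128:]                             # bottom half out of focus
img2 = truth.copy()
img2[:128] = blurred[:128]                             # top half out of focus

scores = {}
for level in range(1, 6):
    c1 = pywt.wavedec2(img1, "db4", level=level)
    c2 = pywt.wavedec2(img2, "db4", level=level)
    fused = [(c1[0] + c2[0]) / 2.0]                    # mean of LF bands
    for b1, b2 in zip(c1[1:], c2[1:]):                 # max-magnitude HF bands
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(b1, b2)))
    out = pywt.waverec2(fused, "db4")
    scores[level] = 1.0 - structural_similarity(
        truth, out, data_range=truth.max() - truth.min())
print(scores)
```

Plotting `scores` against the level number reproduces the kind of curves shown in Figures 5 to 8, one per fusion rule and image.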

    1. Curvelet Transform

      It is observed visually that ringing artifacts are present across all the levels when images are fused using curvelets. These artifacts increase with the levels of decomposition. For images with smooth gray levels, the artifacts are more perceivable than for those with texture. The plots of the mean MSD scores of curvelet based fusion for the different images are shown in Figure 5. The plots show trends in line with the visual inspection, except for the Baboon image. For the Baboon image, the mean MSD values are the highest among all the images, indicating the presence of the maximum amount of ringing artifacts. However, these artifacts are not perceived visually because of the extremely high texture content of the image, on account of texture masking [27].

    2. Contourlet Transform

      For contourlets, ringing artifacts are observed for both fusion rules across all levels of decomposition in the fused images. For mean-max fusion, the artifacts initially increased with the levels of decomposition up to level 3, and then remained the same for the higher levels in most of the images. For the fusion rule proposed by Yang et al. [30], the ringing artifacts are almost unchanged across all levels of decomposition except for levels 1 and 2. Compared to mean-max fusion, fewer ringing artifacts are observed in the results of Yang et al. [30] for higher levels of decomposition. The results of Yang et al. [30] have better contrast and more details than those of mean-max fusion. Also, for both rules, the amount of artifacts increases with the amount of texture. The quantitative analysis confirmed these trends. The mean MSD plots are shown in Figure 6.

        Fig. 4. Fusion results for the Girlface image using different fusion rules. (a) For curvelet transform: mean-max fusion. (b) and (c) For contourlet transform: Rule 1- Mean-max fusion, Rule 2- Yang et al. [30]. (d) to (h) For NSCT, L to R: Rule 1- Mean-max fusion, Rule 2- Ganasala et al. [32], Rule 3- Bhatnagar et al. [12], Rule 4- Li et al. [33], Rule 5- Zhang et al. [34]. (i) to (m) For shearlet transform: Rule 1- Mean-max fusion, Rule 2- Ganasala et al. [36], Rule 3- Ganasala et al. [37], Rule 4- Kong et al. [38], Rule 5- Yin et al. [39].

    3. Non-Subsampled Contourlet Transform

      On visual inspection, ringing artifacts are seen for all the rules used with NSCT. Mean-max fusion and fusion with Ganasala et al. [32] show an increase in the ringing artifacts with increasing levels of decomposition in most of the images. For Zhang et al. [34], a slight decrease is observed. For Bhatnagar et al. [12] and Li et al. [33], a significant decrease in the ringing artifacts was observed with increasing levels of decomposition. For higher levels, the outputs of Li et al. [33] were very close to the original image.

        The plots of the mean MSD scores for NSCT are shown in Figure 7. Here, mean-max fusion has the lowest scores. One can observe a huge improvement in the mean MSD scores for Bhatnagar et al. [12] and Li et al. [33] in all the images. Both these algorithms have mean MSD scores very close to mean-max fusion. However, when observed visually, the outputs of Bhatnagar et al. [12] and Li et al. [33] are much better than those of mean-max fusion.

        Fig. 5. Plots of mean MSD values for fusion using curvelet transform, for the Phantom, Peppers, Girlface, Lena and Baboon images.

        Hence, NSCT can be the preferred choice for the image fusion using directional wavelets.

    4. Shearlet Transform

      For shearlets, all rules except Ganasala et al. [37] show an increase in the ringing artifacts with the levels of decomposition. For Kong et al. [38], the outputs have visually less ringing, whereas Ganasala et al. [37] have more details in the fused output. The plots of the mean MSD scores for the shearlet transform are shown in Figure 8. In these plots, one can see that mean-max fusion has the lowest score for all the images. But, similarly to NSCT, it has more ringing artifacts visually compared to Ganasala et al. [37] and Kong et al. [38].

    5. Comparison Among Different Transforms

      To compare the performance of the different directional wavelets, we compared the mean MSD plots for the different images and different levels of decomposition. The comparison is made in two ways. In the first case, we compared the performance with only the mean-max fusion rule. In the second case, we selected the best performing fusion rule for each transform and then compared their performance: for curvelets it is the mean-max fusion rule; for contourlets it is Yang et al. [30]; for NSCT it is Li et al. [33]; and for shearlets it is Kong et al. [38]. The plots of these comparisons are shown in Figure 9. From these plots, it is clearly observed that NSCT can be the preferred choice among the different directional wavelets to have fewer ringing artifacts in the final fused results. At the same time, contourlets and shearlets exhibit a high amount of ringing artifacts in image fusion.

  6. CONCLUSION

The analysis of ringing artifacts for directional wavelets like curvelets, contourlets, non-subsampled contourlets and shearlets is presented in this paper. The experimental results confirm that ringing artifacts are unavoidable in the process of wavelet based image fusion. The degree of the artifacts varies with the fusion rule, the levels of decomposition and the amount of texture in the images. For most of the directional wavelets, the artifacts increase with the levels of decomposition, except for a few fusion rules employing NSCT. Also, in most of the images, the artifacts increase with texture and edge strength. Among the different directional wavelets, NSCT exhibits the least amount of ringing artifacts.

        REFERENCES

        1. T.-M. Tu, S.-C. Su, H.-C. Shyu, and P. S. Huang, "A new look at IHS-like image fusion methods," Information Fusion, vol. 2, no. 3, pp. 177–186, 2001.

        2. V. P. Shah, N. H. Younan, and R. L. King, "An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1323–1335, 2008.

        3. V. D. Calhoun and T. Adali, "Feature-based fusion of medical imaging data," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 5, pp. 711–720, 2009.

        4. S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, no. 2, pp. 114–123, 2010.

        5. F. Palsson, J. R. Sveinsson, M. O. Ulfarsson, and J. A. Benediktsson, "Model-based fusion of multi- and hyperspectral images using PCA and wavelets," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 5, pp. 2652–2663, 2015.

    6. M. Xu, H. Chen, and P. K. Varshney, "An image fusion approach based on Markov random fields," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 12, pp. 5116–5127, 2011.

    7. K. Kotwal and S. Chaudhuri, "An optimization-based approach to fusion of hyperspectral images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 501–509, 2012.

    8. M. Möller, T. Wittman, A. L. Bertozzi, and M. Burger, "A variational approach for sharpening high dimensional images," SIAM Journal on Imaging Sciences, vol. 5, no. 1, pp. 150–178, 2012.

    9. J. Saeedi and K. Faez, "Infrared and visible image fusion using fuzzy logic and population-based optimization," Applied Soft Computing, vol. 12, no. 3, pp. 1041–1054, 2012.

    10. J. Ma, C. Chen, C. Li, and J. Huang, "Infrared and visible image fusion via gradient transfer and total variation minimization," Information Fusion, vol. 31, pp. 100–109, 2016.

    11. G. Piella, "A general framework for multiresolution image fusion: from pixels to regions," Information Fusion, vol. 4, no. 4, pp. 259–280, 2003.

    12. G. Bhatnagar, Q. J. Wu, and Z. Liu, "Directive contrast based multimodal medical image fusion in NSCT domain," IEEE Transactions on Multimedia, vol. 15, no. 5, pp. 1014–1024, 2013.

    13. R. Singh and A. Khare, "Fusion of multimodal medical images using Daubechies complex wavelet transform – a multiresolution approach," Information Fusion, vol. 19, pp. 49–60, 2014.

    14. L. Wang, B. Li, and L.-F. Tian, "EGGDD: An explicit dependency model for multi-modal medical image fusion in shift-invariant shearlet transform domain," Information Fusion, vol. 19, pp. 29–37, 2014.

    15. K. P. Upla, M. V. Joshi, and P. P. Gajjar, "An edge preserving multiresolution fusion: Use of contourlet transform and MRF prior," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 6, pp. 3210–3220, 2015.

    16. W. Huang and Z. Jing, "Multi-focus image fusion using pulse coupled neural network," Pattern Recognition Letters, vol. 28, no. 9, pp. 1123–1132, 2007.

    17. S. Das and M. K. Kundu, "A neuro-fuzzy approach for medical image fusion," IEEE Transactions on Biomedical Engineering, vol. 60, no. 12, pp. 3347–3353, 2013.

    18. N. Wang, Y. Ma, and K. Zhan, "Spiking cortical model for multifocus image fusion," Neurocomputing, vol. 130, pp. 44–51, 2014.

    19. Z. Wang, S. Wang, Y. Zhu, and Y. Ma, "Review of image fusion based on pulse-coupled neural network," Archives of Computational Methods in Engineering, pp. 1–13, 2015.

    20. B. Zhang, "Study on image fusion based on different fusion rules of wavelet transform," in 2010 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), vol. 3, Aug 2010, pp. V3-649–V3-653.

    21. M. H. Malik, S. A. M. Gilani, and A. ul Haq, "Wavelet based exposure fusion," in Proceedings of the World Congress on Engineering 2008, Vol. I, WCE '08, July 2–4, 2008, London, U.K. International Association of Engineers, 2008, pp. 688–693.

      Fig. 6. Plots of mean MSD values for fusion using contourlet transform, for the Phantom, Peppers, Girlface, Lena and Baboon images. (a) Rule 1- Mean-max fusion. (b) Rule 2- Yang et al. [30].

      Fig. 7. Plots of mean MSD values for fusion using non-subsampled contourlet transform, for the Phantom, Peppers, Girlface, Lena and Baboon images. Here, Rule 1- Mean-max fusion, Rule 2- Ganasala et al. [32], Rule 3- Bhatnagar et al. [12], Rule 4- Li et al. [33], Rule 5- Zhang et al. [34].

    22. S. Dippel, M. Stahl, R. Wiemker, and T. Blaffert, "Multiscale contrast enhancement for radiographies: Laplacian pyramid versus fast wavelet transform," IEEE Transactions on Medical Imaging, vol. 21, no. 4, pp. 343–353, 2002.

    23. R. Fattal, "Edge-avoiding wavelets and their applications," ACM Trans. Graph., vol. 28, no. 3, pp. 1–10, 2009.

    24. A. V. Vanmali, T. Kataria, S. G. Kelkar, and V. M. Gadre, "Ringing artifacts in wavelet based image fusion: Analysis, measurement and remedies," Information Fusion, vol. 56, pp. 39–69, 2020. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1566253517304748

    25. S. Kelkar, "Ringing artifacts in image fusion: analysis and remedies," M.Tech. Thesis, IIT Bombay, 2015.

    26. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.

    27. X. Feng and J. P. Allebach, "Measurement of ringing artifacts in JPEG images," vol. 6076, 2006. [Online]. Available: http://dx.doi.org/10.1117/12.645089

    28. E. Candes, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," Multiscale Modeling & Simulation, vol. 5, no. 3, pp. 861–899, 2006.

    29. M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.

    30. L. Yang, B. Guo, and W. Ni, "Multifocus image fusion algorithm based on contourlet decomposition and region statistics," in Fourth International Conference on Image and Graphics (ICIG 2007). IEEE, 2007, pp. 707–712.

    31. J. Zhou, A. L. Cunha, and M. N. Do, "Nonsubsampled contourlet

      Fig. 8. Plots of mean MSD values for fusion using shearlet transform, for the Phantom, Peppers, Girlface, Lena and Baboon images. Here, Rule 1- Mean-max fusion, Rule 2- Ganasala et al. [36], Rule 3- Ganasala et al. [37], Rule 4- Kong et al. [38], Rule 5- Yin et al. [39].

            0.16

            0.15

            0.14

            0.12

            Curvelet Contourlet NSCT

            Shearlet

            Curvelet Contourlet NSCT

            Shearlet

            Curvelet

            Contourlet NSCT

            Shearlet

            Curvelet

            Contourlet NSCT

            Shearlet

            0.11

            0.13

            0.12

            0.1

            MSD ->

            MSD ->

            MSD ->

            MSD ->

            0.11 0.09

            0.1

            0.09

            0.08

            0.08

            0.07

            0.07

            0.06

            1 2 3 4 5

            Levels of decomposition ->

            0.06

            1 2 3 4 5

            Levels of decomposition ->

            1. Mean-max fusion (b) Best fusion rule

        Fig. 9. Plots of mean MSD values for fusion using contourlet transform

        transform: construction and application in enhancement, in IEEE In- ternational Conference on Image Processing 2005, vol. 1. IEEE, 2005, pp. I469.

    27. P. Ganasala and V. Kumar, CT and MR image fusion scheme in nonsubsampled contourlet transform domain, Journal of Digital Imaging, vol. 27, no. 3, pp. 407–418, 2014.

    28. H. Li, Y. Chai, and Z. Li, Multi-focus image fusion based on nonsubsampled contourlet transform and focused regions detection, Optik – International Journal for Light and Electron Optics, vol. 124, no. 1, pp. 40–51, 2013.

    29. Q. Zhang and B.-l. Guo, Multifocus image fusion using the nonsubsampled contourlet transform, Signal Processing, vol. 89, no. 7, pp. 1334–1346, 2009.

    30. G. Kutyniok, W.-Q. Lim, and X. Zhuang, Digital shearlet transforms, in Shearlets. Springer, 2012, pp. 239–282.

    31. P. Ganasala and V. Kumar, Multimodality medical image fusion based on new features in NSST domain, Biomedical Engineering Letters, vol. 4, no. 4, pp. 414–424, 2014.

    32. ——, Feature-motivated simplified adaptive PCNN-based medical image fusion algorithm in NSST domain, Journal of Digital Imaging, vol. 29, no. 1, pp. 73–85, 2016.

    33. W. Kong and J. Liu, Technique for image fusion based on NSST domain improved fast non-classical RF, Infrared Physics & Technology, vol. 61, pp. 27–36, 2013.

    34. M. Yin, W. Liu, X. Zhao, Y. Yin, and Y. Guo, A novel image fusion algorithm based on nonsubsampled shearlet transform, Optik – International Journal for Light and Electron Optics, vol. 125, no. 10, pp. 2274–2282, 2014.
