Image Gradient For Fusion Of Multi-Focus Images

DOI: 10.17577/IJERTV2IS3688


A. Anish

Department of Information Technology, Karunya University

T. Jemima Jebaseeli

Department of Information Technology, Karunya University

Abstract

This paper presents a hybrid approach for fusing differently focused images into a single picture with an enhanced depth of field. The method uses spatial image gradients as the focus measure and a soft decision rule that enables smooth transitions across region boundaries. The key feature of this method is its robustness to noise and other optical effects. A Graphical User Interface (GUI) is developed for image fusion, mainly for research purposes. A standalone executable application is also developed so that the GUI can be used without MATLAB software.

Keywords: Image gradients, focus information, depth of field, soft decision, optics.

  1. Introduction

Image fusion is a sub-field of image processing in which two or more photographs of the same scene are fused into a single image that carries more useful information and is better suited to visual perception. Image fusion is applied in several areas such as photography, medical science, forensics, remote sensing, and military applications. Fusion can be categorized by the type of images used (visible, infrared, etc.) and by the purpose of the fusion. Based on these factors, fusion can be classified as multi-view, multi-modal, multi-temporal, multi-focus fusion, and fusion for restoration.

Lately, many multi-focus image fusion methods have been introduced. In general, these methods can be classified into two groups: spatial domain and transformed domain [3]. In spatial-domain techniques, fusion takes place directly on the pixel values, whereas in the transformed domain the images are first decomposed into multi-resolution components. Image fusion is generally carried out at four different levels: signal level, pixel level, feature level, and decision level [5].

In signal-based fusion, signals from different cameras are fused to create a new signal with a better signal-to-noise ratio (SNR) than the originals. In pixel-level fusion, the pixel values of the fused image are derived directly from the source images. Feature-based fusion requires the extraction of various features from the source images, and the fusion process operates on those extracted features. In decision-level fusion, multiple algorithms are combined, and the information obtained from each is merged by applying decision rules [1]. Various methods have been developed for image fusion, among them:

    1. Intensity-Hue-Saturation (IHS) Transform

    2. Principal Component Analysis (PCA)

    3. Arithmetic Combinations

    4. Multi-Scale Transform Based Fusion

    5. Total Probability Density Fusion

    6. Biologically Inspired Information Fusion

However, all these fusion techniques blur sharp edges or leave blurring effects in the fused image. The key challenge of multi-focus image fusion is to obtain the fused image without blurring.

  2. Multi-Focus Image Fusion

Multi-focus image fusion is the process of merging two or more images of the same scene into a single sharp image [1]. The fused image is more informative and better suited to visual perception and further processing.

The first step of a multi-focus image fusion algorithm is to identify the focused regions of the source images. For this purpose, many distinctive focus measures are used [2], which respond to changes in the local high-frequency content of the pixels.

When the source images are compared, pixels with larger values of these measures are considered to be in focus and are selected as the pixels of the fused image. Once the focus measure is computed, different fusion rules can be applied.
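As a concrete illustration of this max-selection rule, the following MATLAB sketch (not taken from the paper; the file names and the energy-of-gradient measure are assumptions) compares two registered grayscale sources pixel by pixel. The variables A and B defined here are assumed available in the later sketches.

```matlab
% Minimal sketch: pixel-wise focus comparison of two registered,
% grayscale, same-size source images with values in [0,1].
A = im2double(imread('focus_front.png'));   % hypothetical file names
B = im2double(imread('focus_back.png'));

[Ax, Ay] = gradient(A);                     % spatial image gradients
[Bx, By] = gradient(B);
focusA = Ax.^2 + Ay.^2;                     % energy of gradient (EOG)
focusB = Bx.^2 + By.^2;

% Average the measures over a small neighbourhood so isolated noisy
% pixels do not flip the decision.
h = fspecial('average', 9);
focusA = imfilter(focusA, h, 'replicate');
focusB = imfilter(focusB, h, 'replicate');

F = A;                                      % start from image A ...
F(focusB > focusA) = B(focusB > focusA);    % ... take sharper pixels from B
```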

3. Study on Image Fusion Methods

A study has been made of various multi-focus image fusion methods; their pros and cons are listed below.

Non-subsampled Contourlet Transform: This method combines non-subsampled pyramids [17] with non-subsampled directional filter banks and is a shift-invariant version of the Contourlet transform (CT) for fusing multi-focus images. Even though this method is efficient and can work on real-time system platforms, exact reconstruction of the fused image is not possible.

PCNN (Pulse Coupled Neural Network) method: PCNN is a biologically inspired [19] neural-network-based image fusion technique in which each neuron corresponds to a pixel of the input image. Compared with conventional methods, PCNN approaches have several significant advantages, such as robustness against noise, independence from geometric variations in input patterns, and the capability of bridging minor intensity variations in input patterns. However, these methods are still complicated and time-consuming.

Wavelet-based statistical sharpness measure: This method uses the spread of the wavelet coefficients to determine the amount of blur in the input image. It uses two Laplacian mixture models and three metrics (Chi-square, Kolmogorov, and Kullback-Leibler [18]) to compare the empirical probability density function (PDF) with the wavelet coefficients. Even though the fused image is of better quality, this approach has a higher computational complexity.

Image matting for fusion of multi-focus images in dynamic scenes: This method uses an image matting technique to combine the focus information with the correlation between neighbouring pixels [12]. It has three steps. First, morphological filtering is performed on each source image to measure the focus. Then the focus information is passed to image matting to find the focused object accurately. Finally, the focused objects obtained from the different source images are fused together to construct the fused image.

Image fusion scheme using focused region detection and multiresolution: Integrating the advantages of spatial-domain and transformed-domain fusion methods, this technique combines focused region detection with a Multi-Scale Transform (MST) based fusion method that guides the pixel combination.

PCA for image fusion: PCA uses a mathematical procedure to transform a number of correlated variables into a number of uncorrelated variables called the principal components. In this method, the two-dimensional discrete cosine transform of each input image is calculated, and the resulting coefficients are multiplied by a sampling filter to obtain compressed images. The inverse discrete cosine transform is then performed, and finally PCA fusion [8] is used to produce the fused image. However, the fused image obtained by this method suffers a slight quality loss.
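For the PCA weighting step alone (omitting the DCT compression stage described above), a minimal sketch, assuming A and B from the earlier sketch, is:

```matlab
% Minimal sketch of PCA-weighted fusion: the principal eigenvector of
% the 2x2 covariance of corresponding pixels gives the blending weights.
C = cov([A(:) B(:)]);                       % covariance of pixel pairs
[V, D] = eig(C);
[~, k] = max(diag(D));                      % index of principal component
w = V(:, k) / sum(V(:, k));                 % normalised weights
fusedPCA = w(1) * A + w(2) * B;             % weighted-average fused image
```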

Fusion using Index of Fuzziness: This method uses an index of fuzziness as the focus measure. A focus measure quantifies the information level in portions of the images or in the images as a whole; measures such as energy of image gradient (EOG), energy of Laplacian of the image (EOL) [15], and contrast visibility are used. The algorithm consists of the following steps. First, the source images are decomposed into a number of blocks. Then the index of fuzziness is computed for each block. Finally, the computed values of corresponding blocks are compared and the blocks with the highest index values are taken as the focused regions. This method shows artefacts in the fused image.
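A minimal sketch of these steps, assuming the linear index of fuzziness gamma = (2/n) * sum(min(mu, 1 - mu)) with the normalised grey levels of a block taken as the memberships mu (the block size is an assumption):

```matlab
% Block-wise fusion driven by the linear index of fuzziness.
blk = 16;                                   % assumed block size
fused = zeros(size(A));
for r = 1:blk:size(A, 1)
    for c = 1:blk:size(A, 2)
        rows = r:min(r + blk - 1, size(A, 1));
        cols = c:min(c + blk - 1, size(A, 2));
        pa = A(rows, cols); pb = B(rows, cols);
        fa = 2 * mean(min(pa(:), 1 - pa(:)));   % index of fuzziness of block
        fb = 2 * mean(min(pb(:), 1 - pb(:)));
        if fa >= fb                             % higher index -> in focus
            fused(rows, cols) = pa;
        else
            fused(rows, cols) = pb;
        end
    end
end
```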

  4. Proposed Fusion Method

In this scheme, the fusion output is obtained by decomposing the input images using wavelets and then calculating the focus measure of the two input images using the image gradient algorithm.

    Fig. 1 Gradient Image Fusion Algorithm

    1. Wavelet Transform

For the discrete wavelet transform (DWT), various types of wavelet functions are used. These wavelets are orthogonal or biorthogonal and are characterized by a number of low-pass and high-pass analysis and synthesis filter banks [6]. From these filter banks a wavelet function $\psi(t)$ and a scaling function $\phi(t)$ can be derived. Some typically used wavelet families for the DWT are Daubechies, Coiflets, Haar, Symlets, and biorthogonal wavelets.

Daubechies wavelets are orthogonal wavelets with scaling functions of order 1 to 8. Coiflets are also orthogonal wavelets; they are more symmetric and have more vanishing moments. Symlets are compactly supported orthogonal wavelets with scaling functions of order 2 to 8; they are near-symmetric and have properties similar to those of the Daubechies family. The biorthogonal family of wavelets contains the biorthogonal splines, and exact reconstruction is possible with this type of wavelet.
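For illustration, the following sketch (requires the Wavelet Toolbox; the wavelet names map to the families above, and the test image is arbitrary) performs a one-level 2-D DWT with each family:

```matlab
% One-level 2-D DWT of an image with different wavelet families.
I = im2double(imread('cameraman.tif'));
for w = {'haar', 'db2', 'coif1', 'sym4', 'bior2.2'}
    % approximation (cA) and horizontal/vertical/diagonal detail subbands
    [cA, cH, cV, cD] = dwt2(I, w{1});
    fprintf('%-8s detail energy: %g\n', w{1}, ...
            sum(cH(:).^2) + sum(cV(:).^2) + sum(cD(:).^2));
end
```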

    2. Calculation of the Decision Map

The next step in the algorithm is to obtain the focus information map (decision map) for the source images. To obtain the decision map, morphological filters are used to measure the high-frequency information. The procedure for calculating the decision map has two steps: first, define the classes according to the discriminating features; second, set the procedure for the partitioning. In an ideal classification process, the classes follow known probability distribution functions, called priors.
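One common way to realise such a morphological high-frequency measure, given here as an assumption rather than the paper's exact filter, is the morphological gradient (dilation minus erosion), with A and B as in the earlier sketch:

```matlab
% Morphological gradient as a high-frequency (focus) measure, and the
% binary decision map obtained by comparing the two sources.
se  = strel('square', 3);                   % assumed structuring element
hfA = imdilate(A, se) - imerode(A, se);
hfB = imdilate(B, se) - imerode(B, se);
D   = hfA > hfB;                            % 1 -> take the pixel from A
```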

An active contour model is selected for the partitioning process, in which $\{\Omega_i\}_{i=1,\ldots,K}$ is a family of sets, each represented by a level-set function $\phi_i$, and $K$ is the number of input channels [15]. Depending on the type of fusion process, $\{\Omega_i\}_{i=1,\ldots,K}$ has to cover the whole image. This segmentation process finally becomes a minimization problem with three conditions:

(i) Partition condition [9]:

$F_A(\{\phi_i\}) = \frac{\lambda}{2} \int \big(1 - \sum_i H(\phi_i(x))\big)^2 \, dx$,   (1)

where $H$ is the Heaviside function.

(ii) Length shortening [9]:

$F_B(\{\phi_i\}) = \sum_i \int g(f(x)) \, |\nabla \phi_i(x)| \, dx$,   (2)

where $g$ is a weighting function of the image data $f$.

Fig. 2(a) Input image; Fig. 2(b) Calculated decision map

Furthermore, most methods for autofocusing are global or semi-global in scale [16]. Hence, corroborating each decision with those of neighbouring pixels becomes necessary to maintain the robustness of the algorithm. Adding this corroboration while retaining pixel-level decisions requires summing the per-pixel gradient measures over a $k \times k$ region surrounding each decision point. This yields a new focus measure:

$M(x, y) = \sum_{(m,n) \in W_{k \times k}(x,y)} \big(G_i(m,n) - G_j(m,n)\big)$,   (3)

where $G_i$ and $G_j$ are the gradient-based focus measures of the two differently focused images $i$ and $j$, and $W_{k \times k}(x,y)$ is the $k \times k$ window centred at $(x, y)$. Thus, $M(x, y) > 0$ indicates that the pixel value at location $(x, y)$ should be taken from image $i$; otherwise we choose it from image $j$.
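A minimal sketch of equation (3), assuming the gradient magnitude as the per-pixel measure $G$ and a box window (the value of k is an assumption), with A and B as before:

```matlab
% Focus measure of equation (3): gradient-magnitude difference of the
% two sources, summed over a k x k window around every pixel.
k = 9;                                      % assumed window size
[Ax, Ay] = gradient(A); [Bx, By] = gradient(B);
d = sqrt(Ax.^2 + Ay.^2) - sqrt(Bx.^2 + By.^2);
M = conv2(d, ones(k), 'same');              % sum over the k x k region
```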

    3. Majority Filtering

The use of image gradients in the fusion process can make the decisions vulnerable to large fluctuations that depend on the sensor, the optics, and the scene. Therefore, to maintain robustness, the decisions need the corroboration of neighbouring pixels, and a sigmoid function is applied to the resulting focus measure [16]:

$\tilde{M}(x, y) = 1 / \big(1 + e^{-\beta M(x, y)}\big)$,   (4)

where $\beta$ is a constant.

    4. Image Gradient Fusion

The final stage of the fusion process is to construct the fused image by combining the focused regions of the two source images. To obtain the focused areas, the obtained decision map [12] is compared with the source images; more specifically, each part of an image is compared with the decision map pixel by pixel to identify the high-frequency information. The average of the two images is then calculated based on the decision map. The resulting fused image is taken to be the all-in-focus image.
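Putting equations (3) and (4) together, a minimal sketch of the final soft-decision fusion (the value of beta is an assumption; M, A, and B come from the earlier sketches, and w plays the role of $\tilde{M}$):

```matlab
% Soft decision map from equation (4) and the weighted-average fusion.
beta  = 5;                                  % assumed steepness constant
w     = 1 ./ (1 + exp(-beta * M));          % smooth weights in (0,1)
fused = w .* A + (1 - w) .* B;              % all-in-focus estimate
```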

  5. GUI For Fusion

A GUI (Graphical User Interface) is a pictorial interface to a program that lets the user perform tasks interactively through controls such as sliders and buttons [10]. In MATLAB R2010a, the GUI tools enable tasks such as creating and customizing plots, fitting curves and surfaces, and analysing and filtering signals.

1. GUI for Image Fusion

A GUI environment is designed for the proposed fusion method using MATLAB GUIDE. The environment has buttons to load the input images as well as to select the fusion method. Once the images are loaded, the fused image is obtained by pressing the fuse button. The fused image can also be saved as a .bmp image file.

      Fig. 3 Image Fusion GUI
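The paper's GUI is built with GUIDE; the following programmatic sketch is only an illustration of the same idea (widget names, layout, and the simple gradient rule in the callback are assumptions, and grayscale inputs are assumed):

```matlab
function fusionGUI
% Illustrative GUI: load two images, fuse them, and save the result.
A = []; B = [];
f  = figure('Name', 'Image Fusion GUI', 'MenuBar', 'none');
ax = axes('Parent', f, 'Position', [0.1 0.3 0.8 0.65]);
uicontrol(f, 'String', 'Load image 1', 'Units', 'normalized', ...
          'Position', [0.05 0.05 0.25 0.12], 'Callback', @loadA);
uicontrol(f, 'String', 'Load image 2', 'Units', 'normalized', ...
          'Position', [0.38 0.05 0.25 0.12], 'Callback', @loadB);
uicontrol(f, 'String', 'Fuse & save', 'Units', 'normalized', ...
          'Position', [0.70 0.05 0.25 0.12], 'Callback', @fuseCb);

    function loadA(~, ~)
        A = pickImage();
    end
    function loadB(~, ~)
        B = pickImage();
    end
    function I = pickImage()
        [n, p] = uigetfile({'*.png;*.bmp;*.jpg;*.tif'});
        I = im2double(imread(fullfile(p, n)));
    end
    function fuseCb(~, ~)
        [Ax, Ay] = gradient(A); [Bx, By] = gradient(B);
        m = (Bx.^2 + By.^2) > (Ax.^2 + Ay.^2);  % sharper in B?
        F = A; F(m) = B(m);                     % hard gradient rule
        imshow(F, 'Parent', ax);
        imwrite(F, 'fused.bmp');                % saved as .bmp, as in the text
    end
end
```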

  6. Experimental Results

To demonstrate the effectiveness of the proposed fusion method, it is compared with several image fusion techniques and applied to a pair of multi-focus optical as well as multi-focus medical images. With respect to the quality of the fused image, the proposed gradient fusion method yields the sharpest result.

Figure 4(a) is the fly image with the front part in focus and Figure 4(b) is the fly image with the back part in focus.

Fig. 4(a) Input image 1; Fig. 4(b) Input image 2

Fig. 4(c) Proposed method; Fig. 4(d) SWT fusion

Fig. 4(e) PCA method; Fig. 4(f) Morphological pyramid

By carrying out the fusion schemes, we get the following results. Figures 4(a) and 4(b) are the source images; Figure 4(c) is the result of the proposed method, Figure 4(d) of SWT-based fusion, Figure 4(e) of the PCA method, and Figure 4(f) of the morphological pyramid. The experimental results for these methods are tabulated below.

Fusion Method            PSNR
Average                  23.5670
Select Maximum           26.7963
Morphological pyramid    26.9812
PCA method               53.2370
SWT fusion               59.1387
Gradient Image Fusion    67.0832

Table 1. PSNR results of the different fusion methods
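For reference, a PSNR figure of the kind reported in Table 1 can be computed as follows; the paper does not state its reference image, so the use of an all-in-focus reference R against a fused result F is an assumption:

```matlab
% PSNR between a reference image R and a fused image F (8-bit scale).
err     = double(R(:)) - double(F(:));
mse     = mean(err.^2);                     % mean squared error
psnrVal = 10 * log10(255^2 / mse);          % peak signal-to-noise ratio, dB
```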

The graph below, plotted in MS Excel, shows the PSNR values of the various fusion methods.


    Fig. 5 PSNR values of various Image Fusion methods

  7. Conclusion

This paper presents a hybrid approach to image fusion based on wavelets and image gradients for multi-focus images. It expresses image fusion as an optimisation problem for which a solution is obtained by the proposed fusion method.

The proposed method is successfully examined using a set of multi-focus optical as well as multi-focus medical (CT scan) images. This hybrid method outperforms the simple wavelet fusion method in preserving image quality.

  8. References

1. F. Sroubek, S. Gabarda, R. Redondo, S. Fischer and G. Cristobal, "Multifocus Fusion with Oriented Windows", Academy of Sciences, Pod vodárenskou věží 4, Prague, Czech Republic; Instituto de Óptica, CSIC, Serrano 121, 28006 Madrid, Spain.

2. Matej Kristan, Janez Perš, Matej Perše, Stanislav Kovačič, "A Bayes-Spectral-Entropy-Based Measure of Camera Focus Using a Discrete Cosine Transform", Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, 1001 Ljubljana, Slovenia.

3. Chun-Hung Shen and Homer H. Chen, "Robust Focus Measure for Low-Contrast Images".

4. Aamir Saeed Malik, Tae-Sun Choi, Humaira Nisar, "Depth Map and 3D Imaging Applications: Algorithms and Technologies", IGI Global, November 30, 2011.

5. Huafeng Li, Yi Chai, Hongpeng Yin, Guoquan Liu, "Multifocus image fusion and denoising scheme based on homogeneity similarity", Optics Communications 285 (2012) 91-100.

6. V. S. Petrović, C. S. Xydeas, "Gradient-based multiresolution image fusion", IEEE Transactions on Image Processing, vol. 13, no. 2, pp. 228-237, Feb. 2004.

7. Wilson T. A., Rogers S. K., and Myers L. R. (1995), "Perceptual based hyperspectral image fusion using multiresolution analysis", Optical Engineering.

8. S. Zebhi, M. R. Aghabozorgi Sahaf, and M. T. Sadeghi, "Image fusion using PCA in CS domain", Signal & Image Processing: An International Journal (SIPIJ), Vol. 3, No. 4, August 2012.

9. Filip Sroubek, Gabriel Cristobal and Jan Flusser, "Image Fusion Based on Level Set Segmentation", Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic.

10. Refaat Yousef Al Ashi and Ahmed Al Ameri, "Introduction to Graphical User Interface MATLAB 6.5", UAE University, College of Engineering.

  11. Chapman, Stephen J., MATLAB Programming for Engineers, Brooks Cole, 2001.

12. Shutao Li, Xudong Kang, Jianwen Hu, Bin Yang, "Image matting for fusion of multi-focus images in dynamic scenes", College of Electrical and Information Engineering, Hunan University, Changsha 410082, China.

13. Sangheeta Roy, Palaiahnakote Shivakumara, Partha Pratim Roy and Chew Lim Tan, "Wavelet-Gradient-Fusion for Video Text Binarization", Tata Consultancy Services, Kolkata, India.

14. Shih-Gu Huang, "Wavelet for Image Fusion", Graduate Institute of Communication Engineering & Department of Electrical Engineering, National Taiwan University.

15. Satya R. Chakravarty, Tirthankar Roy, "Measurement of fuzziness: a general approach", Theory and Decision, September 1985, Volume 19, Issue 2, pp. 163-169.

16. Helmy A. Eltoukhy and Sam Kavusi, "A Computationally Efficient Algorithm for Multi-Focus Image Reconstruction", Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305.

17. Arthur L. da Cunha, Jianping Zhou, and Minh N. Do, "The Nonsubsampled Contourlet Transform: Theory, Design, and Applications", IEEE Transactions on Image Processing, vol. 15, no. 10, October 2006.

18. Jing Tian and Li Chen, "Multi-Focus Image Fusion using Wavelet-Domain Statistics", Proceedings of the 2010 IEEE 17th International Conference on Image Processing.

19. Xiaobo Qu, Jingwen Yan, "Multi-focus Image Fusion Algorithm Based on Regional Firing Characteristic of Pulse Coupled Neural Networks".
