Block-based Feature Multi Level Multi Focus Image Fusion on Neural Network

DOI : 10.17577/IJERTV1IS8310


G. Rahul1, R. Kalpana2, Sri Indu College of Engineering & Technology, Hyderabad

2Associate Professor, Sri Indu College of Engineering & Technology, Hyderabad


Abstract— In this paper, a block-based multi-focus image fusion technique is proposed. With a single camera it is often impossible to obtain an image in which all the objects are in focus. Image fusion deals with creating an image in which all the objects are in focus, and it therefore plays an important role in other image-processing tasks such as image segmentation, edge detection, stereo matching and image enhancement. The block-based multi-focus fusion technique follows three steps. First, ten pairs of images are divided into M×N blocks. Second, the features of each block are extracted and fed to a feed-forward neural network. Last, the trained neural network is used to fuse any pair of multi-focus images. The experimental results show that the DWT-based variant has better performance than existing techniques in terms of the quantitative measures and the quality of the fused image.

Keywords- Image fusion, multi-focus, optimal block, neural network


    I. INTRODUCTION

    In computer vision, multisensor image fusion is the process of combining relevant information from two or more images into a single image. The resulting image is more informative than any of the input images. In remote sensing applications, the increasing availability of spaceborne sensors gives a motivation for different image fusion algorithms. Several situations in image processing require high spatial and high spectral resolution in a single image, and most of the available equipment is not capable of providing such data convincingly. Image fusion techniques allow the integration of different information sources, so the fused image can have complementary spatial and spectral resolution characteristics. However, the standard image fusion techniques can distort the spectral information of the multispectral data while merging.

    Image fusion is generally performed at three different levels of information representation: pixel level, feature level and decision level [2]. A number of image fusion techniques have been presented in the literature. In addition to simple pixel-level image fusion techniques, we find more complex techniques such as the multi-resolution approach [1], the Laplacian pyramid [3], fusion based on PCA [4], discrete wavelet transform (DWT) based image fusion [5], neural-network-based image fusion [6] and advanced DWT-based image fusion [7]. These techniques have different merits and demerits; for example, linear wavelets such as the Haar wavelet do not preserve the original data during image decomposition [8]. Similarly, due to the low-pass filtering of wavelets, the edges in the image become smooth and hence the contrast in the fused image is decreased.

    The paper is organized as follows. The flow chart of the proposed method is presented in Section II. Section III describes the 2-level DWT. Section IV describes the image quality measurements. Results are shown in Section V. Section VI concludes the paper.


    II. PROPOSED METHOD

    In Fig. 1, i1, i2, … represent the n different input images. Each block of the flowchart is briefly explained below.

    1. For every image we create two versions of the same size. In the first version, some regions are randomly selected in the left half of the image and blurred; a similar process is performed on the right half of the image in the second version. Every image is then divided into M×N blocks. This distinguishes the blurred and un-blurred regions from each other; the genetic algorithm is then applied to every image to select the optimal block size.

    2. Calculate the feature values of each block of all the images and normalize the feature values to the range [0, 1].

    3. Assign a class value to every block j of the i-th image: if block j is in focus (visible), assign it class value 1; otherwise assign it class value -1.

    4. Create a neural network with an adequate number of layers and neurons. Train the newly created neural network with an adequate number of patterns selected from the features file created in step 2.

    5. Using the trained neural network, identify the clearness of all the blocks of any pair of multi-focus images to be fused.

    6. Fuse the given pair of multi-focus images block by block according to the classification results of the neural network.
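The block-by-block fusion of steps 1-6 can be sketched as follows. This is a minimal illustration, not the paper's implementation: block variance stands in for the trained neural network's clearness decision, and `fuse_blocks` is a hypothetical helper name.

```python
import numpy as np

def fuse_blocks(img_a, img_b, block=8):
    """Fuse two registered multi-focus images block by block.

    A block's clearness is scored here with its variance -- a simple
    stand-in for the trained neural-network classifier; the clearer
    block of each pair is copied into the fused result.
    """
    h, w = img_a.shape
    fused = np.empty_like(img_a)
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = img_a[i:i + block, j:j + block]
            b = img_b[i:i + block, j:j + block]
            fused[i:i + block, j:j + block] = a if a.var() >= b.var() else b
    return fused
```

With two images blurred on opposite halves, each block is taken from whichever source kept that region sharp.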


      Feature selection plays a key role in multi-focus image fusion. In multi-focus images, some objects are in focus and some are out of focus, and the blurred objects reduce the image's clearness. Five different features can be used to characterize the information level contained in a specific portion of the image: variance, energy of gradient, contrast visibility, spatial frequency and Canny edge information. Since energy of gradient and spatial frequency discriminate blurred objects better than contrast visibility, in this paper energy of gradient and spatial frequency are used for feature selection.

      The energy of gradient (EOG) of an M×N image block I is given by

      EOG = \sum_{i=1}^{M-1} \sum_{j=1}^{N-1} \left( [I(i+1, j) - I(i, j)]^2 + [I(i, j+1) - I(i, j)]^2 \right)

      where M is the number of rows and N is the number of columns of the image block.

      The spatial frequency (SF) of an image block combines its row frequency (RF) and column frequency (CF):

      SF = \sqrt{RF^2 + CF^2}

      RF = \sqrt{ \frac{1}{M N} \sum_{i=1}^{M} \sum_{j=2}^{N} [I(i, j) - I(i, j-1)]^2 }

      CF = \sqrt{ \frac{1}{M N} \sum_{i=2}^{M} \sum_{j=1}^{N} [I(i, j) - I(i-1, j)]^2 }

      Fig.1. Flowchart of the proposed method: create left- and right-focus images, divide into M×N blocks, feature selection, apply neural network, fused image.

      A large value of spatial frequency indicates a large information level in the image, and therefore it measures the clearness of the image.
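The two selected features can be transcribed directly from their definitions as a minimal numpy sketch; the function names are illustrative, not from the paper.

```python
import numpy as np

def energy_of_gradient(block):
    """EOG: sum of squared horizontal and vertical first differences."""
    fx = np.diff(block, axis=1)  # I(i, j+1) - I(i, j)
    fy = np.diff(block, axis=0)  # I(i+1, j) - I(i, j)
    return float((fx ** 2).sum() + (fy ** 2).sum())

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2), from row and column frequencies."""
    m, n = block.shape
    rf2 = (np.diff(block, axis=1) ** 2).sum() / (m * n)  # row frequency^2
    cf2 = (np.diff(block, axis=0) ** 2).sum() / (m * n)  # column frequency^2
    return float(np.sqrt(rf2 + cf2))
```

A constant (fully blurred) block scores zero on both features; the sharper the block, the larger both values become.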


    III. 2-LEVEL DWT

    The wavelet transform converts the image into a multi-scale representation with both spatial and frequency characteristics. In this paper, the DWT is used as the frequency-domain representation; it gives better performance (RMSE, PSNR) than block-based feature-level image fusion alone. The procedure of image fusion using the DWT is as follows:

    1. Read the left-blurred and right-blurred source images.

    2. Apply the 2-level DWT to each source image to obtain 16 sub-bands.

    3. Compute the selected features and then apply the neural network.

    4. Apply the inverse DWT to the fused sub-bands produced by the neural network.
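A minimal sketch of wavelet-domain fusion, under stated assumptions: a single-level Haar DWT implemented by hand (the paper uses a 2-level decomposition, which amounts to applying `haar2d` again to the LL band), and a max-absolute-coefficient rule for the detail bands in place of the paper's neural-network selection step. All function names are illustrative.

```python
import numpy as np

def haar2d(x):
    """One level of the 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    L = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)  # low-pass along rows
    H = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)  # high-pass along rows
    LL = (L[0::2] + L[1::2]) / np.sqrt(2)
    LH = (L[0::2] - L[1::2]) / np.sqrt(2)
    HL = (H[0::2] + H[1::2]) / np.sqrt(2)
    HH = (H[0::2] - H[1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (perfect reconstruction)."""
    m, n = LL.shape
    L = np.empty((2 * m, n)); H = np.empty((2 * m, n))
    L[0::2], L[1::2] = (LL + LH) / np.sqrt(2), (LL - LH) / np.sqrt(2)
    H[0::2], H[1::2] = (HL + HH) / np.sqrt(2), (HL - HH) / np.sqrt(2)
    x = np.empty((2 * m, 2 * n))
    x[:, 0::2], x[:, 1::2] = (L + H) / np.sqrt(2), (L - H) / np.sqrt(2)
    return x

def dwt_fuse(a, b):
    """Fuse two registered images in the Haar wavelet domain: average the
    approximation band, keep the larger-magnitude detail coefficients
    (a simple stand-in for the neural-network selection step)."""
    ca, cb = haar2d(a), haar2d(b)
    LL = (ca[0] + cb[0]) / 2
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(ca[1:], cb[1:])]
    return ihaar2d(LL, *details)
```

Keeping the larger detail coefficient favors the source that is sharper at each location, since defocus blur suppresses high-frequency wavelet coefficients.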


    IV. IMAGE QUALITY MEASUREMENTS

    There are different quantitative measures used to evaluate the performance of fusion techniques. We used three measures: Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR) and Mutual Information (MI).

    The RMSE of reference image R and fused image F is given by

    RMSE = \sqrt{ \frac{1}{M N} \sum_{i=1}^{M} \sum_{j=1}^{N} [R(i, j) - F(i, j)]^2 }

    The PSNR of reference image R and fused image F is given by

    PSNR = 20 \log_{10} \left( \frac{L}{RMSE} \right)

    where L is the number of gray levels.

    The MI of fused image F and reference image R is computed from the normalized joint gray-level histogram h_{RF} and the marginal histograms h_R and h_F:

    MI = \sum_{i} \sum_{j} h_{RF}(i, j) \log_2 \frac{h_{RF}(i, j)}{h_R(i) \, h_F(j)}

    V. RESULTS

    Table 1 shows the measurement values of RMSE, PSNR and MI of the test images.

    Fig.2. original image

    Fig.3. left blur image

    Fig.4. right blur image

    Fig.5. fused image by the proposed method

    Table.1. results of quantitative measures
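The three measures can be sketched in numpy as follows, assuming 8-bit images (L = 255) and a 32-bin joint histogram for MI; the function names are illustrative.

```python
import numpy as np

def rmse(r, f):
    """Root mean square error between reference R and fused F."""
    d = np.asarray(r, float) - np.asarray(f, float)
    return float(np.sqrt(np.mean(d ** 2)))

def psnr(r, f, levels=255):
    """Peak signal-to-noise ratio in dB; `levels` is the number of gray levels L."""
    e = rmse(r, f)
    return float("inf") if e == 0 else float(20 * np.log10(levels / e))

def mutual_information(r, f, bins=32):
    """MI from the normalized joint histogram of R and F."""
    joint, _, _ = np.histogram2d(np.ravel(r), np.ravel(f), bins=bins)
    p = joint / joint.sum()
    pr = p.sum(axis=1, keepdims=True)  # marginal of R
    pf = p.sum(axis=0, keepdims=True)  # marginal of F
    nz = p > 0                         # avoid log(0) terms
    return float(np.sum(p[nz] * np.log2(p[nz] / (pr @ pf)[nz])))
```

Lower RMSE and higher PSNR indicate a fused image closer to the reference; higher MI indicates that the fused image retains more of the reference's gray-level information.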

    VI. CONCLUSION

    In this paper, we have presented a block-based multi-focus image fusion technique. The results show better performance and good fused-image quality, with low computation time and cost. In the proposed technique, only one neural network is created, whereas PNN-based image fusion creates a neural network for every pair of multi-focus images, which is time consuming.


  REFERENCES

  1. Ishita De and Bhabatosh Chanda, "A simple and efficient algorithm for multifocus image fusion using morphological wavelets," Signal Processing, pp. 924-936, 2006.

  2. Gonzalo Pajares and Jesus Manuel de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol. 37, no. 9, pp. 1855-1872, 2004.

  3. A. Toet, "Image fusion by a ratio of low-pass pyramid," Pattern Recognition Letters, vol. 9, no. 4, pp. 245-253, 1989.

  4. V.P.S. Naidu and J.R. Raol, "Pixel-level image fusion using wavelets and principal component analysis," Defence Science Journal, vol. 58, no. 3, pp. 338-352, May 2008.

  5. H. Li, B.S. Manjunath and S.K. Mitra, "Multi-sensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.

  6. Give reference here

  7. Yufeng Zheng, Edward A. Essock and Bruce C. Hansen, "An advanced image fusion algorithm based on wavelet transform incorporation with PCA and morphological processing," Proceedings of the SPIE, vol. 5298, pp. 177-187, 2004.

  8. H.J. Heijmans and J. Goutsias, "Multiresolution signal decomposition schemes, Part 2: morphological wavelets," IEEE Trans. Image Processing, vol. 9, pp. 1897-1913, November 2000.

Mr. G. RAHUL graduated from Jaya Prakash Narayana College of Engineering in Electronics & Communications. He is now pursuing a Masters in Digital Electronics and Communication Systems (DECS) at Sri Indu College of Engineering & Technology.

I express my gratitude to R. KALPANA, Associate Professor, Department of ECE, for her constant cooperation, support, and for providing the necessary facilities throughout the program. She has 4 years of teaching experience at B.Tech level and works as an Associate Professor at Sri Indu College of Engineering & Technology.
