Open Access
Total Downloads: 1069
Authors: G. Rahul, R. Kalpana
Paper ID: IJERTV1IS8310
Volume & Issue: Volume 01, Issue 08 (October 2012)
Published (First Online): 30-10-2012
ISSN (Online): 2278-0181
Publisher Name: IJERT
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Block-based Feature Multi Level Multi Focus Image Fusion on Neural Network
G. Rahul1, R. Kalpana2
1 M.Tech. Student, Sri Indu College of Engineering & Technology, Hyderabad
2 Associate Professor, Sri Indu College of Engineering & Technology, Hyderabad
Abstract
In this paper, a block-based multi-focus image fusion technique is proposed. A camera often cannot capture a single image in which all the objects are in focus; image fusion addresses this by creating an image in which all the objects are in focus. It therefore supports other image-processing tasks such as image segmentation, edge detection, stereo matching and image enhancement. The block-based multi-focus fusion technique follows three steps. First, ten pairs of training images are divided into M×N blocks. Second, features are extracted from each block and fed to a feed-forward neural network. Finally, the trained neural network is used to fuse any pair of multi-focus images. The experimental results show that the DWT-based approach outperforms existing techniques in terms of the quantitative measures and the quality of the fused image.
Keywords: image fusion, optimal block, neural network

INTRODUCTION
In computer vision, multi-sensor image fusion is the process of combining relevant information from two or more images into a single image. The resulting image is more informative than any of the input images. In remote sensing applications, the increasing availability of space-borne sensors motivates different image fusion algorithms. Several situations in image processing require high spatial and high spectral resolution in a single image, and most of the available equipment cannot provide such data convincingly. Image fusion techniques allow the integration of different information sources, so the fused image can have complementary spatial and spectral resolution characteristics. However, standard image fusion techniques can distort the spectral information of the multispectral data while merging.
Image fusion is generally performed at three different levels of information representation: pixel level, feature level and decision level [2]. A number of image fusion techniques have been presented in the literature. In addition to simple pixel-level image fusion techniques, we find more complex techniques such as the multi-resolution approach [1], the Laplacian pyramid [3], fusion based on PCA [4], discrete wavelet transform (DWT) based image fusion [5], neural network based image fusion [6] and advanced DWT based image fusion [7]. These techniques have different merits and demerits: for example, linear wavelets such as the Haar wavelet do not preserve the original data during image decomposition [8]. Similarly, because of the low-pass filtering of wavelets, the edges in the image become smooth and hence the contrast in the fused image is decreased.
The paper is organized as follows. The flow chart of the proposed method is presented in Section II. Section III describes the 2-level DWT. Section IV describes the image quality measurements. Results are shown in Section V, and Section VI concludes the paper.

FLOW CHART OF BLOCK-BASED MULTI-FOCUS IMAGE FUSION
In Fig. 1, i1, i2, ..., in represent the n different images. Each block of the flowchart is briefly explained below.

For every image we create two versions of the same size. In the first version, some regions in the left half of the image are randomly selected and blurred; the same process is applied to the right half of the image in the second version. Every image is then divided into M×N blocks, the blurred and unblurred regions are distinguished from each other, and the genetic algorithm is applied to every image.
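The creation of the left- and right-blurred training versions can be sketched in plain numpy. This is a minimal illustration, not the paper's implementation: the box-blur kernel size, the 16×16 region size and the number of blurred regions per half are all assumptions made for demonstration.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box blur (kernel size k, assumed odd) via shifted sums."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def make_training_pair(img, rng):
    """Create two versions of img: v1 with random blurred regions in the
    left half, v2 with random blurred regions in the right half."""
    h, w = img.shape
    blurred = box_blur(img)
    v1, v2 = img.astype(float).copy(), img.astype(float).copy()
    for _ in range(4):  # blur a few randomly chosen 16x16 regions per half
        i = rng.integers(0, max(1, h - 16))
        j1 = rng.integers(0, max(1, w // 2 - 16))
        j2 = rng.integers(w // 2, max(w // 2 + 1, w - 16))
        v1[i:i + 16, j1:j1 + 16] = blurred[i:i + 16, j1:j1 + 16]
        v2[i:i + 16, j2:j2 + 16] = blurred[i:i + 16, j2:j2 + 16]
    return v1, v2
```

Each version keeps the opposite half of the image untouched, so every block has a sharp counterpart in one of the two versions.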

Calculate the feature values of each block of all the images and normalize the feature values to the range [0, 1].

Assign a class value to every block j of the i-th image: if block j is in focus (visible), assign it class value 1; otherwise assign it class value −1.

Create a neural network with an adequate number of layers and neurons. Train the newly created neural network with an adequate number of patterns selected from the feature file created in the previous steps.
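A small feed-forward network of the kind described can be sketched in plain numpy; the single hidden layer, its size, the learning rate and the tanh activations are illustrative assumptions, since the paper does not specify the architecture.

```python
import numpy as np

def train_block_classifier(X, y, hidden=8, lr=0.2, epochs=5000, seed=0):
    """Train a one-hidden-layer tanh network mapping block features
    to a class value in {-1, +1} (1 = in focus, -1 = blurred).
    X: (n_samples, n_features) features normalized to [0, 1]; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    t = y.reshape(-1, 1).astype(float)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)               # hidden activations
        out = np.tanh(h @ W2 + b2)             # network output in (-1, 1)
        d_out = (out - t) * (1.0 - out ** 2)   # backprop through output tanh
        d_h = (d_out @ W2.T) * (1.0 - h ** 2)  # backprop through hidden tanh
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)

    def predict(Xq):
        """Return +1/-1 class decisions for feature rows Xq."""
        out = np.tanh(np.tanh(Xq @ W1 + b1) @ W2 + b2)
        return np.where(out >= 0.0, 1, -1).ravel()
    return predict
```

The returned `predict` function takes normalized feature rows and yields the ±1 class values used in the block-assignment step.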

Using the trained neural network, identify the clearness of all the blocks of any pair of multi-focus images to be fused.

Fuse the given pair of multi-focus images block by block according to the classification results of the neural network.
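The block-by-block fusion step can be sketched as follows. Here `clearer_in_a` is a hypothetical stand-in for the trained network's per-block decision; the variance-based rule in the usage example is only an illustration, not the paper's classifier.

```python
import numpy as np

def fuse_blocks(img_a, img_b, block, clearer_in_a):
    """Assemble the fused image block by block.
    clearer_in_a(block_a, block_b) -> True if the block from img_a is judged
    clearer (in the paper this decision comes from the trained neural network)."""
    h, w = img_a.shape
    fused = np.zeros_like(img_a, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            ba = img_a[i:i + block, j:j + block]
            bb = img_b[i:i + block, j:j + block]
            fused[i:i + block, j:j + block] = ba if clearer_in_a(ba, bb) else bb
    return fused
```

Usage with a simple variance rule (higher local variance taken as "clearer"):

```python
by_variance = lambda ba, bb: ba.var() >= bb.var()
# fused = fuse_blocks(left_focus_img, right_focus_img, 8, by_variance)
```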
The procedure of feature selection plays a key role in multi-focus image fusion. In multi-focus images, some objects are in focus and some are out of focus, and the blurred objects reduce the clearness of the image. We considered five features to characterize the information level contained in a specific portion of the image: variance, energy of gradient, contrast visibility, spatial frequency and Canny edge information. Energy of gradient and spatial frequency discriminate blurred objects better than contrast visibility, so in this paper energy of gradient and spatial frequency are used for feature selection.
The energy of gradient (EOG) of an image block f is given by

EOG = Σ_{i=1}^{M−1} Σ_{j=1}^{N−1} [ f_i(i, j)² + f_j(i, j)² ]

where f_i(i, j) = f(i+1, j) − f(i, j), f_j(i, j) = f(i, j+1) − f(i, j), M is the number of rows and N is the number of columns of the image block.

The spatial frequency (SF) of an image block I is given by

SF = √(RF² + CF²)

where the row frequency RF and the column frequency CF are

RF = √( (1/(M·N)) Σ_{i=1}^{M} Σ_{j=2}^{N} [I(i, j) − I(i, j−1)]² )

CF = √( (1/(M·N)) Σ_{i=2}^{M} Σ_{j=1}^{N} [I(i, j) − I(i−1, j)]² )

Fig. 1. Flowchart of the proposed method: Start → create left- and right-focus images → divide into M×N blocks → feature selection → apply neural network → fused image → End.
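The two selected features, energy of gradient and spatial frequency, follow directly from their definitions; a minimal numpy sketch:

```python
import numpy as np

def energy_of_gradient(block):
    """EOG: sum of squared horizontal and vertical first differences."""
    fx = np.diff(block.astype(float), axis=0)  # f(i+1, j) - f(i, j)
    fy = np.diff(block.astype(float), axis=1)  # f(i, j+1) - f(i, j)
    return (fx ** 2).sum() + (fy ** 2).sum()

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2) from the row and column frequencies."""
    I = block.astype(float)
    m, n = I.shape
    rf2 = ((I[:, 1:] - I[:, :-1]) ** 2).sum() / (m * n)  # row frequency squared
    cf2 = ((I[1:, :] - I[:-1, :]) ** 2).sum() / (m * n)  # column frequency squared
    return np.sqrt(rf2 + cf2)
```

A perfectly flat (heavily blurred) block scores zero on both features; blocks with sharp detail score high.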
A large value of spatial frequency indicates a high information level in the image, and therefore it measures the clearness of the image.


2-LEVEL DISCRETE WAVELET TRANSFORM
The wavelet transform converts the image into a multi-scale representation with both spatial and frequency characteristics. In this paper, the DWT is used as the frequency-domain representation; it gives better performance (RMSE, PSNR) than block-based feature-level image fusion alone. The procedure of image fusion using the DWT is as follows:
Read the original left-blurred and right-blurred images.
Apply the 2-level DWT to each image to obtain 16 sub-bands.
Compute the selected features and then apply the neural network.
Apply the inverse DWT to the output of the neural network.
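A 2-level decomposition that yields 16 sub-bands corresponds to a full wavelet packet split, in which each of the four level-1 sub-bands is split again. The sketch below uses simple Haar averaging filters as an illustrative assumption, since the paper does not name the wavelet used:

```python
import numpy as np

def haar_step(x):
    """One 2-D Haar analysis step: returns LL, LH, HL, HH sub-bands (half size)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row pairs: average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row pairs: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return [ll, lh, hl, hh]

def wavelet_packet_2level(img):
    """Full 2-level wavelet packet: every level-1 sub-band is split again,
    giving 4 x 4 = 16 sub-bands at a quarter of the original side length."""
    level1 = haar_step(np.asarray(img, dtype=float))
    return [sb for band in level1 for sb in haar_step(band)]
```

The first sub-band (LL of LL) is the coarse approximation: each of its coefficients is the mean of a 4×4 block of the input.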
IMAGE QUALITY MEASUREMENTS
Different quantitative measures are used to evaluate the performance of fusion techniques. We used three measures: Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR) and Mutual Information (MI).
The RMSE between the reference image R and the fused image F is given by

RMSE = √( (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [R(i, j) − F(i, j)]² )

The PSNR of the reference image R and the fused image F is given by

PSNR = 20 log10 ( L² / ( (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [R(i, j) − F(i, j)]² ) )

where L is the number of gray levels.

The MI between the fused image F and the reference image R is given by

MI = Σ_i Σ_j h(i, j) log2 ( h(i, j) / (h_R(i) h_F(j)) )

where h is the normalized joint gray-level histogram of R and F, and h_R, h_F are the normalized marginal histograms.

EXPERIMENTAL RESULTS

Fig. 2. Original image
Fig. 3. Left-blur image
Fig. 4. Right-blur image
Fig. 5. Image fused by the proposed method

Table 1 shows the measurement values of RMSE, PSNR and MI for the test image pairs.

Table 1. Results of quantitative measures

Pair 1: RMSE 1.7004, 1.6985; PSNR 21.76, 22.58; MI 0.9314, 0.8756
Pair 2: RMSE 1.6481, 1.6321; PSNR 21.94, 22.67; MI 0.9432, 0.8823
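The three quality measures (RMSE, PSNR, MI) can be sketched with numpy. The histogram bin count in the MI estimate is an illustrative assumption, and the PSNR uses the 20·log10(L²/MSE) form stated above:

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error between reference and fused images."""
    return np.sqrt(((ref.astype(float) - fused.astype(float)) ** 2).mean())

def psnr(ref, fused, levels=256):
    """PSNR in dB; `levels` is the number of gray levels L."""
    mse = ((ref.astype(float) - fused.astype(float)) ** 2).mean()
    return 20 * np.log10(levels ** 2 / mse)

def mutual_information(ref, fused, bins=64):
    """MI estimated from the joint and marginal gray-level histograms."""
    joint, _, _ = np.histogram2d(ref.ravel(), fused.ravel(), bins=bins)
    pxy = joint / joint.sum()               # normalized joint histogram
    px = pxy.sum(axis=1, keepdims=True)     # marginal of ref
    py = pxy.sum(axis=0, keepdims=True)     # marginal of fused
    nz = pxy > 0                            # avoid log of zero
    return (pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum()
```

Lower RMSE and higher PSNR/MI indicate a fused image closer to the reference.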

CONCLUSION
In this paper, we have presented a block-based multi-focus image fusion technique. The results show better performance, and the image quality of the fused result is good. The computation time and cost of this technique are low: only one neural network is created, whereas in PNN-based image fusion a neural network is created for every pair of multi-focus images, which is time-consuming.

REFERENCES

[1] Ishita De and Bhabatosh Chanda, "A simple and efficient algorithm for multifocus image fusion using morphological wavelets", Signal Processing, pp. 924-936, 2006.
[2] Gonzalo Pajares and Jesus Manuel de la Cruz, "A wavelet-based image fusion tutorial", Pattern Recognition, vol. 37, no. 9, pp. 1855-1872, 2004.
[3] A. Toet, "Image fusion by a ratio of low-pass pyramid", Pattern Recognition Letters, vol. 9, no. 4, pp. 245-253, 1989.
[4] V.P.S. Naidu and J.R. Raol, "Pixel-level image fusion using wavelets and principal component analysis", Defence Science Journal, vol. 58, no. 3, pp. 338-352, May 2008.
[5] H. Li, B.S. Manjunath and S.K. Mitra, "Multisensor image fusion using the wavelet transform", Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.
[6] Give reference here
[7] Yufeng Zheng, Edward A. Essock and Bruce C. Hansen, "An advanced image fusion algorithm based on wavelet transform: incorporation with PCA and morphological processing", Proceedings of the SPIE, vol. 5298, pp. 177-187, 2004.
[8] H.J. Heijmans and J. Goutsias, "Multiresolution signal decomposition schemes, Part 2: morphological wavelets", IEEE Transactions on Image Processing, vol. 9, pp. 1897-1913, November 2000.
Mr. G. Rahul graduated from Jaya Prakash Narayana College of Engineering in Electronics & Communications and is now pursuing a Master's in Digital Electronics and Communication Systems (DECS) at Sri Indu College of Engineering & Technology.
I express my gratitude to R. Kalpana, Associate Professor, Department of ECE, for her constant cooperation, support and for providing the necessary facilities throughout the M.Tech program. She has 4 years of teaching experience at the B.Tech and M.Tech levels and works as an Associate Professor at Sri Indu College of Engineering & Technology.