 Open Access
Authors : C. Madhavi, G. Mamatha, B. N. Shobha
 Paper ID : IJERTV1IS4048
 Volume & Issue : Volume 01, Issue 04 (June 2012)
Published (First Online): 30-06-2012
ISSN (Online) : 2278-0181
 Publisher Name : IJERT
 License: This work is licensed under a Creative Commons Attribution 4.0 International License
Image Texture Feature Analysis for Multi-focus Image Fusion Based on Discrete Wavelet Transform
C. Madhavi1, G. Mamatha2, B. N. Shobha3
1Student, JNTU, Anantapur, 2Faculty, JNTU, Anantapur, 3Faculty, MSRSAS, Bangalore
Abstract: Image fusion is the process by which two or more images are combined into a single image, retaining the important features from each of the original images. An image fusion methodology consists of two basic stages: image registration, which brings the input images into spatial alignment, and fusion itself, i.e., combining image functions (intensities, colors, etc.) in the area of frame overlap. An image fusion algorithm based on the image texture energy feature in the wavelet domain is proposed. Firstly, the source images are decomposed into high-frequency and low-frequency components; secondly, the texture energy of the frequency sub-band images is computed, after which the wavelet fusion coefficients are obtained according to a set of rules; finally, the fused image is produced by applying the inverse wavelet transform. In combination with visual evaluation, the standard deviation and information entropy were used as quantitative measures to evaluate the fusion results of different algorithms; the results show that the proposed algorithm is an effective image fusion algorithm.
Keywords: Multi-focus image; image fusion; wavelet transform; texture analysis; texture feature

INTRODUCTION
Due to the restrictions on the depth of field of optical lenses, a visible-light imaging system is often unable to capture an image in which all targets (objects) are in focus. In an image captured by such a device, only the objects within the depth of field are in focus, while the other objects are blurred. This problem can be solved by multi-focus image fusion [11]. This technology improves the utilization efficiency of image information, facilitates understanding by human vision and by computers, and is widely used in machine vision, object recognition and other fields [2]. Image fusion can be separated into three levels: pixel-level, feature-level and decision-level image fusion. The multi-focus image fusion method based on the wavelet transform [5] proposed in this paper belongs to the pixel level.
Pixel-level or signal-level image fusion represents fusion at the lowest level, where a number of raw input image signals are combined to produce a single fused image. At the pixel level, images are combined by considering individual pixel values or small arbitrary regions of pixels in order to make fusion decisions. Pixel-level algorithms work either in the spatial domain or in the transform domain. Feature-level image fusion is also known as object-level image fusion. This method fuses features, object labels and property-descriptor information that have already been extracted from the individual input images, and produces a fused image with a reduced amount of redundant information. Finally, at the highest level, symbol- or decision-level image fusion represents the fusion of probabilistic decision information obtained by local decision makers operating on the results of feature-level processing of the image data.
Considering the characteristics of multi-focus images, algorithms based on the wavelet transform commonly use the weighted average method in the fusion of the low-frequency domain, while the maximum value method, the regional maximum value method and the regional energy method are the fusion methods commonly used in the high-frequency domain. Among them, the weighted average method and the maximum value method are fusion methods based on a single pixel; they ignore the correlation between a pixel and its adjacent pixels, resulting in a one-sided fusion effect. Although the regional maximum value method and the regional energy method consider the correlation between neighboring pixels, they are deficient in measuring the salience of texture and edge features. To some extent, these methods limit further improvement of the clarity of the fused image. In light of the background mentioned above, this article presents a texture-energy-based image fusion algorithm in the wavelet domain; it considers not only the correlation between a pixel and its adjacent pixels but also the image texture and edge features. Experimental results show that, compared with the aforementioned existing methods, the proposed method is effective for multi-focus image fusion: it retains more visually meaningful information from the source images and effectively improves image clarity.
The rest of the paper is organized as follows. Some concepts related to the proposed method, and the proposed algorithm itself, are presented in Section II. In Section III, experiments with different methods and comparisons with other existing methods are given. Finally, Section IV concludes the paper.

IMAGE FUSION BASED ON WAVELET TRANSFORMATION
The wavelet transform is a multi-resolution analysis [4] technique that represents image variations at different scales. The computation of the wavelet transform of a 2-D image involves recursive filtering and sub-sampling [8]. At each level we obtain three high-frequency sub-images, corresponding to the horizontal, vertical and diagonal directions, as well as one low-frequency sub-image: LH (containing detail information in the horizontal direction), HL (containing detail information in the vertical direction), HH (containing detail information in the diagonal direction) and LL (containing the main contour of the original image). The LL band is then decomposed recursively, so that at the end we obtain the sub-images at each level.
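The single-level decomposition described above can be sketched in pure Python. This is a minimal illustration, not the authors' implementation: the 4×4 toy image, the averaging/differencing (Haar/db1) filter convention and the per-pass 1/2 normalization are all assumptions chosen for readability.

```python
# One level of the 2-D Haar (db1) decomposition: the averaging/differencing
# filter pair is applied along rows, then along columns, yielding the
# LL, LH, HL and HH sub-images described in the text.

def haar_1d(signal):
    """One level of 1-D Haar: (approximation, detail) half-length sequences."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def haar_2d(image):
    """One decomposition level: returns the LL, LH, HL, HH sub-images."""
    # Filter the rows: approximations and details, each half-width.
    lo_rows, hi_rows = [], []
    for row in image:
        a, d = haar_1d(row)
        lo_rows.append(a)
        hi_rows.append(d)

    def filter_columns(block):
        transposed = list(map(list, zip(*block)))
        a_cols, d_cols = [], []
        for col in transposed:
            a, d = haar_1d(col)
            a_cols.append(a)
            d_cols.append(d)
        # Transpose back to row-major order.
        return list(map(list, zip(*a_cols))), list(map(list, zip(*d_cols)))

    LL, LH = filter_columns(lo_rows)   # approximation and one detail band
    HL, HH = filter_columns(hi_rows)   # remaining detail bands
    return LL, LH, HL, HH

image = [[4, 4, 8, 8],
         [4, 4, 8, 8],
         [2, 2, 6, 6],
         [2, 2, 6, 6]]
LL, LH, HL, HH = haar_2d(image)
print(LL)  # -> [[4.0, 8.0], [2.0, 6.0]]: coarse 2x2 approximation
```

Recursing on LL, as the text describes, would produce the next decomposition level; which detail band is labeled LH versus HL varies between conventions.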
In light of the resemblance between multi-resolution filtering and the human visual system, the wavelet transform is often used for image applications such as image segmentation, image retrieval and image fusion.
The process of wavelet-based image fusion can be described as follows. Firstly, the wavelet transform is carried out on each registered source image, and the wavelet transform coefficients of the source images are obtained. Then, a fusion decision map, of the same size as the sub-image at the same level, is generated based on some defined fusion rules. The values of the fusion decision map are the indices of the coefficients that contain more of the information of interest than those of the other source images at the corresponding locations. Finally, the coefficients of the fused image are selected through the fusion decision map, and the fused image is obtained by transforming back to the spatial domain.
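The decision-map step just described can be sketched as follows, assuming the wavelet coefficients of two registered source images are already available as equal-sized 2-D arrays. The selection rule used here (larger absolute coefficient) is only a placeholder for illustration; the paper's actual rule, based on texture energy, is defined in the next section.

```python
# Build a per-location decision map over two coefficient arrays, then
# gather the fused coefficients through it.

def build_decision_map(coeffs_a, coeffs_b):
    """1 where source A's coefficient is selected, 2 where source B's is."""
    return [[1 if abs(a) >= abs(b) else 2 for a, b in zip(ra, rb)]
            for ra, rb in zip(coeffs_a, coeffs_b)]

def apply_decision_map(dmap, coeffs_a, coeffs_b):
    """Pick each fused coefficient from the source named by the map."""
    return [[a if m == 1 else b for m, a, b in zip(rm, ra, rb)]
            for rm, ra, rb in zip(dmap, coeffs_a, coeffs_b)]

coeffs_a = [[5, -1], [0, 3]]   # illustrative coefficient values
coeffs_b = [[2, -4], [7, 3]]
dmap = build_decision_map(coeffs_a, coeffs_b)
fused = apply_decision_map(dmap, coeffs_a, coeffs_b)
print(dmap)   # -> [[1, 2], [2, 1]]
print(fused)  # -> [[5, -4], [7, 3]]
```

In the full pipeline, one such map is built per sub-band and level, and the gathered coefficients are passed to the inverse wavelet transform.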

TEXTURE FEATUREBASED IMAGE FUSION
The high-frequency and low-frequency sub-bands of the source image decomposed by the wavelet transform have significant texture salience. For multi-focus images, higher texture salience [6] denotes important, visually meaningful information such as image texture and edges. Multi-focus images describing the same scene have different texture salience at corresponding targets: the targets with higher clarity in one source image have larger texture salience in that region than in the other source images. This kind of texture salience can be characterized by texture features. The texture-feature-based image fusion method selects the coefficients from the source image with the higher texture saliency as the coefficients of the fused image.

Texture energy measurement
Image texture is a function of the spatial variation in pixel intensities (gray values). Texture can be calculated in two ways: the co-occurrence matrix method and the run-length matrix method. In this paper, we use the co-occurrence matrix method [10]. The algorithm for texture feature extraction is as follows:
1. Read the input image and store it in a buffer.
2. Requantize the input image to the required number of gray levels, in order to reduce computation time.
3. Set the window size for the feature calculation.
4. For each window around a pixel, perform the following:
   - Calculate the co-occurrence matrix (GLCM).
   - Calculate the feature value.
   - Store it at the corresponding position.
5. Find the minimum and maximum feature values.
6. Normalize the feature values by intensity mapping from 0 to 255.
7. Display the resultant feature image.

The procedure for the GLCM is:
1. Create the framework matrix.
2. Decide on the spatial relation between the reference and neighboring pixel.
3. Count the occurrences and fill in the framework matrix.
4. Add the matrix to its transpose to make it symmetrical.
5. Normalize the matrix to turn it into probabilities.

The calculation for the symmetrical normalized GLCM is as follows:

p(i,j) = V(i,j) / Σ_{i,j=0}^{N-1} V(i,j)

where p(i,j) is the texture measurement at location (i,j) and V(i,j) is the count in each cell of the GLCM. From this texture measurement we can calculate texture features such as:

Contrast = Σ_{i,j=0}^{N-1} p(i,j) (i - j)^2

Dissimilarity = Σ_{i,j=0}^{N-1} p(i,j) |i - j|

Homogeneity = Σ_{i,j=0}^{N-1} p(i,j) / (1 + (i - j)^2)

Angular second moment (ASM) and energy:

ASM = Σ_{i,j=0}^{N-1} p(i,j)^2

Energy = sqrt(ASM)

Entropy = Σ_{i,j=0}^{N-1} p(i,j) (-ln p(i,j))

Correlation = Σ_{i,j=0}^{N-1} p(i,j) (i - μ_i)(j - μ_j) / sqrt(σ_i^2 σ_j^2)

where μ_i, μ_j and σ_i, σ_j are the GLCM means and standard deviations.

Fusion of wavelet coefficients
Let A(i,j) and B(i,j) be the wavelet transform coefficients of the source images A and B at location (i,j), respectively; then E_A(i,j) and E_B(i,j) are obtained from the energy equation above. We adopt the texture energy to determine the wavelet coefficients of the fused image. Let F(i,j) be the wavelet transform coefficient of the fused image F at location (i,j):

F(i,j) = A(i,j), if E_A(i,j) >= E_B(i,j)
F(i,j) = B(i,j), if E_A(i,j) < E_B(i,j)

This fusion step is repeated at each sample position of the pyramid. Finally, the fused image is obtained by applying the inverse wavelet transform.

Figure 1. Original image blurred on left corner side
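The texture energy measurement and fusion rule above can be sketched in pure Python. This is an illustrative sketch only: the 2×2 windows, the four requantized gray levels and the horizontal-neighbor spatial relation are assumptions, not the paper's experimental settings.

```python
# Symmetrical, normalized GLCM for a window, the GLCM-based texture
# features, and the energy-comparison rule for picking a fused coefficient.
import math

def glcm(window, levels):
    """Symmetrical, normalized GLCM counting horizontal neighbor pairs."""
    counts = [[0] * levels for _ in range(levels)]
    for row in window:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            counts[b][a] += 1          # add the transpose: symmetry
    total = sum(sum(r) for r in counts)
    return [[v / total for v in row] for row in counts]

def texture_features(window, levels=4):
    """Contrast, ASM, energy and entropy of the window's GLCM."""
    p = glcm(window, levels)
    cells = [(i, j, p[i][j]) for i in range(levels) for j in range(levels)]
    contrast = sum(v * (i - j) ** 2 for i, j, v in cells)
    asm = sum(v * v for _, _, v in cells)
    entropy = -sum(v * math.log(v) for _, _, v in cells if v > 0)
    return {"contrast": contrast, "ASM": asm,
            "energy": math.sqrt(asm), "entropy": entropy}

def fuse_coefficient(coef_a, coef_b, energy_a, energy_b):
    """F = A if E_A >= E_B, else B (the fusion rule above)."""
    return coef_a if energy_a >= energy_b else coef_b

win_a = [[0, 1], [2, 3]]       # window around A's coefficient (illustrative)
win_b = [[1, 1], [1, 1]]       # window around B's coefficient (illustrative)
e_a = texture_features(win_a)["energy"]
e_b = texture_features(win_b)["energy"]
print(fuse_coefficient(10.0, 20.0, e_a, e_b))  # -> 20.0
```

In practice the window slides over every coefficient position of each sub-band, and the chosen coefficients are assembled before the inverse transform.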


SIMULATION RESULTS AND ANALYSIS
The multi-focus source images fused in this article are degraded images blurred with a Gaussian smoothing kernel; Fig. 1 to Fig. 4 show the source images, each blurred in a different region. The wavelet used for decomposition and reconstruction is the db1 wavelet, and the number of wavelet decomposition levels is 3. In order to evaluate the effectiveness of the proposed algorithm, we take the standard deviation and the information entropy as quantitative measures, in addition to subjective evaluation.
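The two quantitative measures can be sketched as follows: the standard deviation of the fused image's gray values and the Shannon entropy of its gray-level histogram. The 4-gray-level toy image is illustrative only; the experiments in this paper use 8-bit images.

```python
# Standard deviation and information entropy of a grayscale image,
# the two quantitative measures used to score the fusion results.
import math

def standard_deviation(image):
    """Population standard deviation of all gray values."""
    pixels = [v for row in image for v in row]
    mean = sum(pixels) / len(pixels)
    return math.sqrt(sum((v - mean) ** 2 for v in pixels) / len(pixels))

def information_entropy(image, levels=256):
    """Shannon entropy (bits) of the gray-level histogram."""
    pixels = [v for row in image for v in row]
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    n = len(pixels)
    # H = -sum over non-empty bins of p_k * log2(p_k)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

image = [[0, 0, 1, 1],
         [2, 2, 3, 3],
         [0, 1, 2, 3],
         [3, 2, 1, 0]]
print(standard_deviation(image))          # -> ~1.118 (sqrt of 1.25)
print(information_entropy(image, levels=4))  # -> 2.0 bits (uniform histogram)
```

A larger standard deviation indicates higher contrast in the fused image, and a larger entropy indicates more information content, which is how Table I is read.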
The source images and the results of the different algorithms are shown in Fig. 1 to Fig. 11. Fig. 1, Fig. 2, Fig. 3 and Fig. 4 are the source images. Fig. 5 to Fig. 11 are the fused images produced by the weighted average method, the maximum value method, the texture contrast feature method, the texture dissimilarity feature method, the texture homogeneity feature method, the texture ASM feature method and the texture energy feature method, respectively.
Figure 2. Original image blurred on middle left side
Figure 3. Original image blurred on middle right side
Figure 4. Original image blurred on right corner side
Figure 5. Weighted average method
Figure 6. Maximum value method
Figure 7. Texture contrast feature method
Figure 8. Texture dissimilarity feature method
Figure 9. Texture homogeneity feature method
Figure 10. Texture ASM feature method
Figure 11. Texture energy feature method
The maximum value method, the regional maximum value method and the regional energy method carry out fusion in the low-frequency domain by taking the weighted average of the transform coefficients of the two source images as the coefficients of the fused image, while in the high-frequency domain they obtain the transform coefficients of the fused image by the rules of the maximum value, the regional maximum value and the regional energy, respectively. The proposed method carries out texture-energy-based image fusion over the whole frequency domain.
Examining Fig. 7, Fig. 8, Fig. 9, Fig. 10 and Fig. 11, we can see that the proposed algorithm outperforms the weighted average method and the maximum value method. Table I shows the quantitative measures for all the fusion methods; from Table I, the proposed algorithm is better than the other methods. The experimental results show that the fused image of the proposed algorithm preserves more visually meaningful information than those of the other methods.
TABLE I. Quality assessment of the different fusion methods [1].

Fusion method            Standard deviation    Information entropy
Weighted average         14.4143               6.9586
Maximum value            13.9791               6.9699
Texture contrast         19.9634               6.9762
Texture ASM              19.8041               6.9817
Texture energy           19.9708               6.9833
Texture homogeneity      19.7899               6.9918
Texture dissimilarity    19.7028               7.0003

CONCLUSION
Because they ignore the correlation between a pixel and its adjacent pixels, image fusion based on the weighted average method and on the maximum value method has a one-sided effect. Although the regional maximum value method and the regional energy method consider the correlation between neighboring pixels, they are deficient in evaluating the texture salience of the images, limiting further improvement of the clarity of the fused image. The algorithm proposed in this article considers the correlation between a pixel and its neighboring pixels as well as the texture salience in all frequency sub-bands of the source images; the fused image has richer details and further improved overall clarity. Table I shows that the information entropy metric is improved by the proposed method. As a future enhancement, the proposed method could be applied to the fusion of medical images.
REFERENCES:
[1] Yuee Li, Yongfeng Han, Qiang Wang, "An image fusion algorithm based on image texture energy feature," IEEE, 2010.
[2] Wang Ke, Ouyang Ning, "Application of wavelet analysis in multi-focus image fusion," Electronics Optics & Control, vol. 15, no. 3, pp. 64-67, 2008.
[3] Yan Jingwen, Digital Image Processing, Beijing: National Defence Industry Press, pp. 180-211, 2007.
[4] Na Yan, Jiao Licheng, Image Fusion Methods Based on Multiresolution Analysis, Xi'an: Xidian University Press, pp. 114-133, May 2007.
[5] Zhou Wei, Guilin, MATLAB Wavelet Analysis Advanced Technology, Xi'an: Xidian University Press, Jan. 2006.
[6] J. C. Nunes, S. Guyot, E. Delechelle, "Texture analysis based on local analysis of the bidimensional empirical mode decomposition," Machine Vision and Applications, vol. 16, no. 3, pp. 177-188, 2005.
[7] Jia Yonghong, Digital Image Processing, Wuchang: Wuhan University Press, pp. 178-181, Sep. 2004.
[8] S. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-693, 1989.
[9] L. Itti, C. Koch, et al., "A model of saliency-based visual attention for rapid scene analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254-1259, 1998.
[10] www.fp.ucalgary.ca/mhallbery/tutorial.htm
[11] Anjali Malviya, S. G. Bhirud, "Wavelet based multi-focus image fusion," International Conference, 2009.