 Authors : Vishnupriya G. L. , K. S. Srinivas
 Paper ID : IJERTV3IS070573
 Volume & Issue : Volume 03, Issue 07 (July 2014)
Published (First Online): 16-07-2014
ISSN (Online) : 2278-0181
 Publisher Name : IJERT
 License: This work is licensed under a Creative Commons Attribution 4.0 International License
Implementation of Pixel Level Color Image Fusion for Concealed Weapon Detection
Vishnupriya G. L
Signal Processing and VLSI, SBMJCE
Kanakapura, India
K. S. Srinivas
Asst. Prof., Dept. of ECE, SBMJCE
Kanakapura, India
Abstract: Image fusion has various applications in the fields of military and law enforcement. Image fusion for Concealed Weapon Detection (CWD) has attracted considerable interest in the military field. In this paper, linear pixel-level image fusion has been implemented using the vector-valued total variation algorithm (VTVA). It includes decomposition of the covariance matrix of the multispectral bands using the Cholesky decomposition. The decomposed data is linearly transformed and mapped. The statistical properties of the resultant fused image can be controlled by the user.
Keywords: Concealed Weapon Detection (CWD), pixel-level image fusion, VTVA

INTRODUCTION
Image fusion [4] is the process of merging two or more images taken from different sensors to form a fused image that is more informative and efficient than the source images. It is difficult for a human observer to combine visual information simply by viewing multiple images separately, so image fusion is used. Pixel-level image fusion is performed on a pixel-by-pixel basis: it generates a fused image in which the information associated with each pixel is determined from the corresponding pixels of the source images, improving on the individual source images. In general, pixel-level fusion methods can be classified as linear or nonlinear.
The algorithm proposed in this paper is a linear pixel-level image fusion method that improves the visual quality of the fused image. It is applied to Concealed Weapon Detection (CWD), which aims to detect a weapon or metal object hidden underneath a person's clothing. Research activities are ongoing to improve the quality of the fused image.

VECTOR VALUED TOTAL VARIATION ALGORITHM (VTVA)
The statistical properties of a multispectral data set with X × Y pixels per channel and k different channels can be explored if each pixel i is described by a vector Mi whose components are the individual spectral responses to each multispectral channel, with mean vector given by

m = (1 / (X·Y)) Σi Mi

The mean vector is used to define the average, or expected, position of the pixels in the vector space.
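As an illustration, the pixel-vector representation and the mean vector can be sketched in NumPy (the array shapes and data below are illustrative assumptions, not the paper's data set):

```python
import numpy as np

# Hypothetical multispectral data set: k = 3 channels of X x Y pixels each.
X, Y, k = 4, 4, 3
rng = np.random.default_rng(0)
bands = rng.uniform(0, 255, size=(k, X, Y))

# Each pixel i is described by a k-component vector M_i of spectral
# responses: reshape so that row i of M is the vector for pixel i.
M = bands.reshape(k, -1).T        # shape (X*Y, k)

# Mean vector m = (1 / (X*Y)) * sum_i M_i
m = M.sum(axis=0) / (X * Y)
print(m)                          # average position in the vector space
```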

Covariance matrix
The correlation between the multispectral bands can be described by the covariance matrix: large off-diagonal elements indicate that the multispectral bands are correlated with each other, while the diagonal elements are the variances of the individual bands. The covariance matrix is real, symmetric and positive definite, and can be obtained as

CM = (1 / (X·Y)) Σi (Mi − m)(Mi − m)^T

The correlation coefficient rij is obtained by dividing each covariance matrix element by the standard deviations of the corresponding multispectral components (rij = cij / (σi σj)). The correlation coefficient matrix RM = [rij] has as its elements the correlation coefficients between the ith and jth multispectral components. Both covariance matrices CM and CN are real and symmetric.
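A minimal sketch of the covariance and correlation computation, assuming the pixel vectors are stored as rows of an array M (synthetic data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(1000, 3))        # pixel vectors, k = 3 channels

m = M.mean(axis=0)                    # mean vector
D = M - m                             # deviations from the mean
CM = (D.T @ D) / len(M)               # covariance matrix C_M (k x k)

# Correlation coefficients r_ij = c_ij / (sigma_i * sigma_j)
sigma = np.sqrt(np.diag(CM))          # per-band standard deviations
RM = CM / np.outer(sigma, sigma)      # correlation coefficient matrix R_M
print(np.diag(RM))                    # diagonal entries are exactly 1
```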
Fig. 1. Block diagram of VTVA
The covariance matrix CN can be obtained as the product of a diagonal matrix and the correlation coefficient matrix:

CN = E RN E

where E is the diagonal matrix whose diagonal elements are the standard deviations of the desired output components, and RN is the desired correlation coefficient matrix.

Scaling

Scaling maps the linearly transformed data into the range [0, 255]; the transformed components must be scaled in order to produce a valid RGB representation.

Cholesky decomposition
CM and CN can be transformed into the upper triangular matrices QM and QN, respectively, by means of the Cholesky decomposition, which is applicable only to real, symmetric, positive definite matrices. A real symmetric positive definite matrix P can be decomposed by means of an upper triangular matrix Q so that
P = Q^T Q (7)
The factorized CM and CN can be written as

CM = QM^T QM (8)
CN = QN^T QN (9)

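The factorization in (7)-(9) can be checked numerically; note that `np.linalg.cholesky` returns the lower triangular factor L with P = L L^T, so the upper triangular Q used here is L^T (the matrix P below is an arbitrary positive definite example):

```python
import numpy as np

P = np.array([[4.0, 2.0, 0.6],
              [2.0, 5.0, 1.0],
              [0.6, 1.0, 3.0]])       # real, symmetric, positive definite

L = np.linalg.cholesky(P)             # lower triangular, P = L @ L.T
Q = L.T                               # upper triangular factor

print(np.allclose(P, Q.T @ Q))        # P = Q^T Q as in (7) -> True
```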
Transformation matrix
The transformation matrix transforms the fused multispectral components into an RGB (color) image. The transformation matrix A depends on the statistical properties of the original data set and can be obtained using the Cholesky decomposition. The relation between the covariance matrices is

CN = A^T CM A (10)
Each transformed component Nk is mapped into the display range as

Rk = 255 · (Nk − min(Nki)) / (max(Nki) − min(Nki))

where min(Nki) and max(Nki) are the minimum and maximum values of the transformed vector Nk, respectively.


SIGNAL TO NOISE RATIO
The signal-to-noise ratio (SNR) is calculated as the ratio of the RMS value of the reference input to the RMS value of the difference between the reference input and the mapped output image.
The SNR for the fused image at the output can be calculated, in decibels, as

SNR = 20 log10 ( RMS(reference) / RMS(reference − output) ) (14)
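A hedged sketch of this SNR measure (the helper name `snr_db` and the synthetic images are assumptions for illustration):

```python
import numpy as np

def snr_db(reference, output):
    """SNR in dB: RMS of the reference over RMS of (reference - output)."""
    ref = np.asarray(reference, dtype=float)
    err = ref - np.asarray(output, dtype=float)
    rms_ref = np.sqrt(np.mean(ref ** 2))
    rms_err = np.sqrt(np.mean(err ** 2))
    return 20.0 * np.log10(rms_ref / rms_err)

# Illustrative check: a lightly perturbed copy of the reference scores high.
rng = np.random.default_rng(3)
ref = rng.uniform(0, 255, size=(64, 64))
out = ref + rng.normal(0.0, 0.5, size=ref.shape)
print(round(snr_db(ref, out), 1))
```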
TABLE I. SNR VALUE IN DECIBELS VERSUS BIT LENGTH OF THE FUSED IMAGE

Bit length | SNR value of previous paper | SNR value of proposed system
12-bit     | 77.20 dB                    | 72.8407 dB
16-bit     | 98.15 dB                    | 73.1117 dB
Substituting (8) and (9) in (10) gives

QN^T QN = A^T QM^T QM A = (QM A)^T (QM A)

Thus QN = QM A, and the transformation matrix can be written as

A = QM^-1 QN
The SNR values of the previous method and the proposed method for 12-bit and 16-bit fused images are shown in Table I.

EXPERIMENTAL RESULTS

Linear transformation
The linear transformation maps the input multispectral components onto the output components.
Fig. 2. Natural color image
Fig. 3. Infrared image
Fig. 4. Fused image 1
ACKNOWLEDGMENT
This research work has been supported by my project guide Mr. K. S. Srinivas, Asst. Professor, Dept of ECE. I would like to thank my supervisors, friends and parents for their support.
REFERENCES

D. Besiris et al., "An FPGA-based hardware implementation of configurable pixel level color image fusion," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 2, Feb. 2012.

V. Tsagaris and V. Anastassopoulos, "Multispectral image fusion for improved RGB representation based on perceptual attributes," Int. J. Remote Sens., vol. 26, no. 15, pp. 3241-3254, Aug. 2005.

M. Kumar and S. Dass, "A total variation-based algorithm for pixel-level image fusion," IEEE Trans. Image Process., vol. 18, no. 9, Sep. 2009.

A. Goshtaby and S. Nikolov, "Image fusion: Advances in the state of the art," Inf. Fusion, vol. 8, no. 2, pp. 114-118, Apr. 2007.

R. S. Blum and Z. Liu, Eds., Multi-Sensor Image Fusion and Its Applications (Special Series on Signal Processing and Communications). New York: Taylor & Francis, 2006.

T. Stathaki, Image Fusion: Algorithms and Applications. New York: Academic, 2008.

Fig. 5. Fused image 2
First, a natural color image is taken and its RGB components are separated. Next, a grayscale infrared image is taken, in which the weapon hidden beneath the clothing is visible. These are shown in the first two figures. The last two images are fused images, in which the hidden weapon is clearly seen. The color of the resultant fused image is configurable by the user.
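For reference, the whole chain described above can be sketched end-to-end on synthetic data standing in for the co-registered source bands (all names, sizes and target statistics here are illustrative assumptions, not the paper's test images):

```python
import numpy as np

rng = np.random.default_rng(4)
H, W = 32, 32
M = rng.uniform(0, 255, size=(H * W, 3))       # three source bands as pixel vectors

# Source statistics
m = M.mean(axis=0)
CM = np.cov(M, rowvar=False, bias=True)

# User-specified target statistics: std devs (E) and correlations (RN),
# giving CN = E RN E -- this is where the fused color is configured.
E = np.diag([40.0, 40.0, 40.0])
RN = np.array([[1.0, 0.2, 0.2],
               [0.2, 1.0, 0.2],
               [0.2, 0.2, 1.0]])
CN = E @ RN @ E

# Transformation matrix A = QM^{-1} QN from the Cholesky factors (8)-(10).
A = np.linalg.solve(np.linalg.cholesky(CM).T, np.linalg.cholesky(CN).T)

# Transform, then scale each output channel into [0, 255].
N = (M - m) @ A
N = 255.0 * (N - N.min(axis=0)) / (N.max(axis=0) - N.min(axis=0))
fused = N.reshape(H, W, 3).astype(np.uint8)    # fused RGB image
print(fused.shape)                             # (32, 32, 3)
```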