Image Fusion For Concealed Weapon Detection

DOI : 10.17577/IJERTV2IS2529


Dagnew Yalemtsehay Gared1, Xuewen Ding2

1School of Electronic Information Engineering, Tianjin University of Technology and Education, China

2School of Electronic Engineering, Tianjin University of Technology and Education, China

Abstract

This paper presents an approach to image fusion for concealed weapon detection (CWD) applications, i.e. the detection of weapons concealed underneath a person's clothes. CWD is an increasingly important topic in the general area of law enforcement, and it appears to be a critical technology for dealing with terrorism. It is very important for improving the security of the public as well as the safety of public assets such as airports, buildings, and railway stations. Manual screening gives unsatisfactory results when the object is not within the range of security personnel and when the flow of people is uncontrolled. The goal of this paper is to develop an automatic detection and recognition system for concealed weapons, using a visual image and a corresponding IR image with the help of image fusion technology.

Keywords: Concealed weapon detection (CWD), image fusion, IR image, visual image.

  1. Introduction

    To address the emerging threats from terrorists, there is a need to develop efficient techniques that meet heightened security and law enforcement requirements. The manual screening procedures already in use sometimes give false alarms, and they fail when the object is not within the range of security personnel or when it is impossible to manage the flow of people through a controlled procedure. Hence, passengers with concealed objects may go undetected. Imaging systems with a radiation wavelength longer than 20 µm can penetrate clothing and thus have the potential to detect concealed weapons. The enabling sensing mechanisms being studied include infrared (IR), acoustic, millimeter wave (MMW), and X-ray sensors. Infrared imagers rely on the temperature distribution of the target to form an image. The usual assumption is that the infrared radiation emitted by the human body is absorbed by clothing and then re-emitted by it. In the IR image the background is almost black, with little detail, because of the high thermal emissivity of the body. The weapon appears darker than the surrounding body due to the temperature difference between them (the weapon is colder than the human body).

    The visual image, by contrast, is an ordinary gray or RGB image that matches human visual perception. Its resolution is much higher than that of the IR image, but it carries no useful information about the concealed weapon. The human visual system is very sensitive to color, and fusing the visual image with another image exploits this ability to produce a fused image that better supports detection. Several image fusion technologies have been applied to fusing visual and IR images for CWD applications. These include color image fusion (Zhiyun and Rick, 2003), the Dual Tree-Complex Wavelet Transform (DT-CWT) (Wang and Lu, 2009), the Wavelet Transform (WT) (Yocky, 1995; Shu-long, 2002), Expectation Maximization-Covariance Intersection (EM-CI) (Siyue and Henry, 2009), and Multiscale-Decomposition-Based (MDB) fusion methods (Zhang and Rick, 1999). Wavelet decomposition has also been applied to image fusion in recent years with good results (Jorge et al., 1999; Yonghyun et al., 2011). A Multiscale-Decomposition-Based (MDB) fusion method is applied for the CWD application in this study.

  2. Image Fusion Method

    The most important issue concerning image fusion is determining how to combine the sensor images; the methods for doing so are called fusion techniques. They range from the simplest method, pixel averaging, to more complicated methods such as principal component analysis and wavelet transform fusion. Several approaches to image fusion can be distinguished, depending on whether the images are fused in the spatial domain or are transformed into another domain and their transforms fused. In this section we briefly describe Multiscale-Decomposition-Based (MDB) fusion methods, especially Discrete Wavelet Transform (DWT) fusion.
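
    For reference, the simplest technique just mentioned, pixel averaging, amounts to the following minimal sketch. It is illustrative only; it is not the method adopted in this paper, and the function name is our own.

```python
# Simplest spatial-domain fusion: per-pixel averaging of two registered,
# equal-size grayscale images (illustrative only; not this paper's method).
import numpy as np

def average_fuse(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    # Work in float to avoid uint8 overflow, then return the pixel mean.
    return (img1.astype(np.float64) + img2.astype(np.float64)) / 2.0
```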

  3. Multiscale-Decomposition-Based (MDB) Methods

    Multiscale-decomposition-based (MDB) fusion methods consist of three main steps. First, each source image is decomposed into a multiscale representation using a multiscale transform. Then a composite multiscale representation is constructed from the source representations according to a fusion rule. Finally, the fused image is obtained by taking the inverse multiscale transform of the composite representation. The methods differ in the particular multiscale representation and fusion rule employed.
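
    To make the three steps concrete, the following is a minimal sketch of an MDB fusion pipeline, assuming a discrete wavelet transform as the multiscale transform (the variant described next) and the PyWavelets library. The wavelet ('db2'), the number of levels, the averaging of the approximation band, and the maximum-absolute-value rule for detail bands are illustrative choices, not parameters reported in this paper.

```python
# Minimal MDB fusion sketch: decompose -> combine -> inverse transform.
# Assumes two registered, equal-size grayscale images as float arrays.
import numpy as np
import pywt

def mdb_fuse(img1, img2, wavelet="db2", levels=3):
    # Step 1: decompose each source image into a multiscale representation.
    c1 = pywt.wavedec2(img1, wavelet, level=levels)
    c2 = pywt.wavedec2(img2, wavelet, level=levels)

    # Step 2: build the composite representation with a fusion rule.
    # Here: average the coarse approximation band; for each detail band,
    # keep the coefficient with the larger absolute value.
    fused = [(c1[0] + c2[0]) / 2.0]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))

    # Step 3: invert the multiscale transform; crop in case the
    # transform padded the image to an even size.
    out = pywt.waverec2(fused, wavelet)
    return out[:img1.shape[0], :img1.shape[1]]
```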

    1. Wavelet Transform Fusion

      The most common form of transform-domain image fusion is wavelet transform fusion. In common with all transform-domain fusion techniques, the transformed images are combined in the transform domain using a defined fusion rule and then transformed back to the spatial domain to give the resulting fused image. Wavelet transform fusion is defined more formally by considering the wavelet transforms ω of the two registered input images I1(x, y) and I2(x, y) together with the fusion rule φ. The inverse wavelet transform ω⁻¹ is then computed, and the fused image I(x, y) is reconstructed:

      I(x, y) = ω⁻¹(φ(ω(I1(x, y)), ω(I2(x, y)))).

      This process is depicted in Figure 1.

      Figure 1: Fusion of the wavelet transforms of two images.

    2. Discrete Wavelet Transform (DWT) Fusion

      The basic idea of all multiresolution fusion schemes is that the human visual system is primarily sensitive to local contrast changes, e.g. edges or corners. In wavelet transform fusion, all corresponding wavelet coefficients from the input images are combined using the fusion rule φ. Since wavelet coefficients with large absolute values carry the information about salient features of the images, such as edges and lines, a good fusion rule is to take the maximum of the absolute values of the corresponding wavelet coefficients. A more advanced, area-based selection rule was proposed by Li et al.: the maximum absolute value within a window is used as an activity measure for the central pixel of the window, and a binary decision map of the same size as the DWT is constructed to record the selection results based on a maximum selection rule. A similar method was suggested by Burt and Kolczynski; rather than using a binary decision, the resulting coefficients are given by a weighted average based on the local activity levels in each of the images' subbands. Another method, contrast-sensitivity fusion, uses a weighted energy in the human perceptual domain, where the perceptual domain is based on the frequency response, i.e. the contrast sensitivity, of the human visual system; this wavelet transform fusion scheme is an extension of the pyramid-based scheme described by the same authors. Finally, a publication by Zhang and Blum provides a detailed classification and comparison of multiscale image fusion schemes.

      Implemented fusion rules (a code sketch of all three follows the list):

      • Maximum Selection (MS) scheme: This simple scheme just picks the coefficient in each subband with the largest magnitude;

      • Weighted Average (WA) scheme: This scheme, developed by Burt and Kolczynski, uses a normalized correlation between the two images' subbands over a small local area. The resultant coefficient for reconstruction is calculated from this measure via a weighted average of the two images' coefficients;

      • Window Based Verification (WBV) scheme: This scheme developed by Li et al. creates a binary decision map to choose between each pair of coefficients using a majority filter.
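
      The sketch below shows one plausible NumPy/SciPy implementation of the three rules, applied to a single pair of corresponding detail subbands. The window size, the match threshold alpha, and the simplification of Li et al.'s verification step to a majority vote over a window are our assumptions, not specifications from the original papers.

```python
# Hedged sketches of the three fusion rules for one pair of corresponding
# detail subbands a, b (2-D float arrays of equal shape).
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def fuse_ms(a, b):
    # Maximum Selection: keep the coefficient with the larger magnitude.
    return np.where(np.abs(a) >= np.abs(b), a, b)

def fuse_wa(a, b, win=3, alpha=0.75):
    # Weighted Average (after Burt and Kolczynski): local energies and a
    # normalized correlation ("match") between the subbands set the weights.
    ea = uniform_filter(a * a, win)
    eb = uniform_filter(b * b, win)
    match = 2.0 * uniform_filter(a * b, win) / (ea + eb + 1e-12)
    # Low match -> pure selection; high match -> move toward averaging.
    w_min = np.where(match > alpha,
                     0.5 - 0.5 * (1.0 - match) / (1.0 - alpha), 0.0)
    w_max = 1.0 - w_min
    wa = np.where(ea >= eb, w_max, w_min)  # higher-energy source gets w_max
    return wa * a + (1.0 - wa) * b

def fuse_wbv(a, b, win=3):
    # Window-Based Verification (after Li et al.): binary decision map from
    # the maximum absolute value in a window, cleaned up by a majority filter.
    act_a = maximum_filter(np.abs(a), win)
    act_b = maximum_filter(np.abs(b), win)
    decision = (act_a >= act_b).astype(float)
    decision = uniform_filter(decision, win) > 0.5  # majority vote in window
    return np.where(decision, a, b)
```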

  4. Experimental Results and Analysis

    A good fusion algorithm should preserve or enhance all the useful features from the source images, avoid introducing artifacts or inconsistencies that would distract human observers or hinder subsequent processing, suppress noise, and provide robustness against registration errors. In our current work, we are interested in using image fusion to help a human or computer detect a concealed weapon using IR and visual sensors.

    The experimental results of visual and IR image fusion based on multiresolution wavelet decomposition are illustrated in the figures below. Figures 2 and 3 show the original color visual image and the gray-level visual image, respectively. Both images have high resolution, but the obscured weapon is invisible in them. Figure 4 shows the IR image. In the IR image, the concealed weapon is detected and appears at dark intensity due to its low temperature compared to the human body. It should be noted that some areas of the background also appear at low intensity (having a gray level similar to the region of interest) because of their low thermal emissivity.

    Figure 2: RGB image

    Figure 3: Gray image

    Figure 4: IR image

    The visual RGB image above is not used by our algorithm to find the concealed weapon; it is included only for visual comparison between the single-channel IR image and the gray image.

    Because the two input images shown in Figure 3 and Figure 4 are taken from two different image sensing devices, they are of different sizes. We therefore first resize the two images, because image fusion and the other operations are not possible unless the sizes are the same.
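
    A minimal sketch of this resizing step, assuming OpenCV and hypothetical file names; the choice to resample the IR image to the gray image's size (rather than the reverse) is ours.

```python
# Resize the IR image to match the gray visual image before fusion.
import cv2

gray = cv2.imread("visual_gray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical paths
ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)

# Note: cv2.resize expects (width, height), while NumPy shapes are (height, width).
ir_resized = cv2.resize(ir, (gray.shape[1], gray.shape[0]),
                        interpolation=cv2.INTER_LINEAR)
assert ir_resized.shape == gray.shape  # fusion now sees equal-size inputs
```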

    Figure 5: Fused image 1

    Figure 6: Complemented IR

    Figure 7: Fused image 2

    The fused result of the visual image and the IR image is shown in Figure 5. We want to extract the hidden details from Figure 5, but the fused image is not clear enough, so it does not give sufficient information. We therefore complement the IR image, which is useful in the next operation; the complemented image is shown in Figure 6. The IR image intensities lie in the range 0 to 255, so complementing means subtracting every matrix element from 255, which yields the reversed form of the IR image. Lastly, we fuse the visual image with the complemented IR image; the result, shown in Figure 7, has a resolution close to the original gray image and reveals the concealed weapon.
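
    Putting the last two operations together, a sketch of the complement-and-refuse step might look as follows; mdb_fuse, gray, and ir_resized are the illustrative names from the earlier sketches.

```python
# Complement the 8-bit IR image (reverse its intensities), then fuse it
# with the gray visual image to obtain the result shown in Figure 7.
import numpy as np

ir_complement = 255 - ir_resized                   # subtract each pixel from 255
fused2 = mdb_fuse(gray.astype(np.float64),
                  ir_complement.astype(np.float64))
fused2 = np.clip(fused2, 0, 255).astype(np.uint8)  # back to displayable 8-bit
```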

  5. Conclusion

    In this paper we proposed an image fusion technique for CWD in which we fused a visual gray image and an IR image. We were able to detect the weapon concealed under a person's clothes, but infrared radiation can reveal a concealed weapon only when the clothing is tight, thin, and stationary. For normally loose clothing, the emitted infrared radiation is spread over a larger clothing area, decreasing the ability to detect a weapon. In our future work, we will try to improve the quality of the fused image and address the above-mentioned problem.

  6. References

  1. A. Al-Qubaa and G. Y. Tian: Weapon Detection and Classification Based on Time-Frequency Analysis of Electromagnetic Transient Images. International Journal on Advances in Systems and Measurements, vol. 5, no. 3 & 4, pp. 89-99, 2012.

  2. McMillan RW, O Milton J, Hetzler MC, Hyde RS, Owens WR (2000): Detection of concealed weapons using far-infrared bolometer arrays. In: Conference Digest of the 25th International Conference on Infrared and Millimeter Waves, pp. 259-260.

  3. Slamani MA, Ramac L, Uner M, Varshney P, Weiner DD, Alford M, Derris D, Vannicola V (1997): Enhancement and fusion of data for concealed weapons detection. In: Proc. SPIE, vol. 3068, pp. 20-25.

  4. Burt PJ, Kolczynski RJ (1993): Enhanced image capture through fusion. In: Proceedings of the 4th International Conference on Image Processing, pp. 248-251.

  5. Yu-Wen Chang and Michael Johnson: Portable Concealed Weapon Detection Using Millimeter Wave FMCW Radar Imaging. Federal funds provided by the U.S. Department of Justice, August 30, 2001.

  6. Z. Xue, R. S. Blum, and Y. Li: Fusion of Visual and IR Images for Concealed Weapon Detection. U.S. Army Research Office under grant number DAAD19-00-1-0431, pp. 1198-1205.

  7. Sathya Annadurai and V. Vaithiyanathan: Concealed Weapon Detection Using Multiresolution Additive Wavelet Decomposition. Research Journal of Applied Sciences, Engineering and Technology, 4(20): 4118-4121, 2012.

  8. Zhiyun Xue and Rick S. Blum: Concealed Weapon Detection Using Color Image Fusion. ISIF, pp. 622-627, 2003.

  9. Zheng Liu, Zhiyun Xue, Rick S. Blum, and Robert Laganière: Concealed Weapon Detection and Visualization in a Synthesized Image. Pattern Analysis and Applications (2006) 8: 375-389.

  10. H. Li, B. Manjunath, and S. Mitra: Multisensor Image Fusion Using the Wavelet Transform. Graphical Models and Image Processing, vol. 57, pp. 235-245, May 1995.

  11. A. Agurto, Y. Li, G. Y. Tian, N. Bowring, and S. Lockwood: A Review of Concealed Weapon Detection and Research in Perspective. Proceedings of the 2007 IEEE International Conference on Networking, Sensing and Control, London, pp. 443-448, 2007.

  12. D. M. Sheen, D. L. McMakin, and T. E. Hall: Three-Dimensional Millimeter-Wave Imaging for Concealed Weapon Detection. IEEE Transactions on Microwave Theory and Techniques, vol. 49, pp. 1581-1592, 2001.

  13. K. Amolins, Y. Zhang, and P. Dare: Wavelet Based Image Fusion Techniques: An Introduction, Review and Comparison. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 62, pp. 249-263, 2007.

  14. Klock BA (2003): Interface and Usability Assessment of Imaging Systems. IEEE AESS Systems Magazine, 18(3): 11-12.
