Survey on Flame Detection Technologies using Videos

DOI: 10.17577/IJERTV4IS070250


Indu Seethanathan

Computer Science Department, Jawaharlal College of Engineering and Technology

Kerala, India

Abstract— Detecting fire is one of the oldest problems, and many techniques have been used to control or eliminate it. Rapid development and improvement in image processing technology has led to a wider use of video applications, and vision based detection techniques have become popular in the past decade. They are used in surveillance systems to provide security in places like banks, to monitor traffic, and so on. A video camera covers a wider range than traditional sensors, and a great deal of information can be extracted from a video image. Consecutive images from a video help to understand a scene and the changes in its objects over time. Although traditional sensors like smoke and heat detectors work well for indoor flame detection, these point sensors are not applicable in open spaces. This paper is a brief survey of different fire detection systems that use videos.

Keywords— Fire detection, flame detection, video based detection, video image

  1. INTRODUCTION

    It is vital to detect fire at the right time. Controlled fires, like those in fireplaces and candles, do not pose a threat if one is careful. Uncontrolled fire, however, can be fatal and cause huge damage to life and property. Many techniques are employed to detect fire early, and one of the relatively new methods is detecting flame from a video. Some vision based techniques consider the color of objects, while other methods rely on motion detection. Vision based detection has three steps: preprocessing, feature extraction and classification. Preprocessing, as the name suggests, prepares the system for processing; here it converts the video into image frames and performs the required transformations. The feature extraction step detects the required target. The classification algorithms then use the calculated features as input and produce an output based on the presence of a target.
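
    A minimal sketch of this three-stage pipeline is given below, assuming OpenCV for frame handling; the function names, the colour thresholds and the 0.05 alarm threshold are illustrative assumptions and are not taken from any of the surveyed papers.

import cv2

def preprocess(frame):
    """Resize and convert a raw BGR video frame to the RGB colour space."""
    frame = cv2.resize(frame, (320, 240))
    return cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

def extract_features(rgb):
    """Return the fraction of pixels whose colour roughly matches fire."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    fire_like = (r > 190) & (g > 100) & (b < 140) & (r >= g) & (g > b)
    return fire_like.mean()

def classify(fire_fraction, threshold=0.05):
    """Raise an alarm when enough of the frame looks fire-coloured."""
    return fire_fraction > threshold

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if classify(extract_features(preprocess(frame))):
        print("possible flame detected in this frame")
cap.release()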

  2. FLAME DETECTION TECHNIQUES

When an uncontrolled fire breaks out, the two prominent features that appear are smoke and flame. However, smoke cannot be detected at night or in a closed warehouse without proper lighting; it can be detected only under favorable lighting conditions. For this reason, only flame detection systems are considered in this survey.

  1. A System for Real Time Fire Detection

    In this paper, an algorithm is proposed to detect fire on the basis of the spatial, spectral and temporal characteristics of fire. A grid is formed from a video image, and the possible fire pixels are called connected components. Using this technique, the growth of the flame can be determined from the previous data. This technique is purely color based and it uses properties obtained from a large amount of video data.
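
    The colour-based connected-component idea might be sketched as follows, assuming OpenCV; the colour thresholds and the 1.2 growth factor are illustrative assumptions rather than values from the paper.

import cv2
import numpy as np

def fire_colour_mask(bgr):
    """Binary mask of pixels whose colour roughly matches flame."""
    b, g, r = cv2.split(bgr)
    return ((r > 200) & (g > 100) & (b < 120)).astype(np.uint8)

def largest_component_area(mask):
    """Area (in pixels) of the largest connected fire-coloured component."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n <= 1:                    # label 0 is the background
        return 0
    return int(stats[1:, cv2.CC_STAT_AREA].max())

def monitor_growth(frames, growth_factor=1.2):
    """Report when the largest fire-coloured component grows between frames."""
    prev_area = None
    for frame in frames:          # frames: any iterable of BGR images
        area = largest_component_area(fire_colour_mask(frame))
        if prev_area and area > growth_factor * prev_area:
            print("flame region is growing")   # growth judged from the previous frame
        prev_area = area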

  2. Computer Vision Based Method For Real Time Fire And Flame Detection

    By studying videos in the wavelet domain, it becomes possible to detect flames. Spatial wavelet analysis detects the high frequency nature of the image both on the contour of the fire region and within it. For an ordinary moving fire colored object there is little change in pixel values, so the wavelet coefficients do not change; the color variations inside genuine fire regions, by contrast, are detected by calculating the spatial wavelet transform.
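
    The spatial wavelet step might look roughly like the following, assuming the PyWavelets library; the Haar wavelet and the energy threshold are illustrative choices, not the paper's exact settings.

import numpy as np
import pywt  # PyWavelets

def high_frequency_energy(gray_region):
    """Energy of the detail (high-frequency) sub-bands of a candidate region."""
    _, (cH, cV, cD) = pywt.dwt2(gray_region.astype(float), "haar")
    return float(np.sum(cH ** 2) + np.sum(cV ** 2) + np.sum(cD ** 2))

def looks_like_flame(gray_region, energy_threshold=1e4):
    # A fire-coloured but rigid object keeps nearly constant pixel values,
    # so its wavelet detail coefficients stay small; real flames do not.
    return high_frequency_energy(gray_region) > energy_threshold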

  3. Fire Detection Using Statistical Color Model In Video Sequence

    This technique detects fire in real time by combining the color of fire pixels with foreground object information. Three Gaussian distributions are used, each corresponding to the pixel statistics in one color channel (red, green and blue), and a generic fire model is constructed by performing statistical analyses on sample images containing fire pixels. The first step of the algorithm is background subtraction, which detects foreground objects; the detected changes are then sent to a color verification process. If an object has the color of fire, it is grouped into blobs. A temporal analysis is done on each blob: if the blob's center location and size change, the blob might be a fire candidate. Each blob is covered by a rectangular guard area, and the nature and behavior of the blob within this guard area help determine whether the object is a flame or not. However, if there is a sudden change in the lighting conditions, there will be an error in the detection.
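
    A minimal sketch of the one-Gaussian-per-channel colour check is given below; the channel means, standard deviations and likelihood threshold are placeholder values that would in practice be estimated from labelled fire pixels.

import numpy as np

# Hypothetical statistics learned from sample images containing fire pixels.
MEANS = np.array([220.0, 140.0, 60.0])   # R, G, B channel means
STDS = np.array([25.0, 35.0, 40.0])      # R, G, B channel standard deviations

def channel_likelihoods(rgb_pixel):
    """Gaussian likelihood of the pixel value in each colour channel."""
    x = np.asarray(rgb_pixel, dtype=float)
    return np.exp(-0.5 * ((x - MEANS) / STDS) ** 2) / (STDS * np.sqrt(2 * np.pi))

def is_fire_coloured(rgb_pixel, tau=1e-3):
    """Pixel is fire-coloured when all three channel likelihoods are high enough."""
    return bool(np.all(channel_likelihoods(rgb_pixel) > tau))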

  4. Real Time Flame Detection Based On Color Context

    This detection algorithm selects the fire area from a video frame by using an optical flame feature area network and a dynamic feature row network. The algorithm, called the Color Context Analysis based Real-time Flame Detection Algorithm (CCAFDA), considers only the flame feature area in each frame instead of the pixel features of all pixels, which reduces the computational cost because the frame pixels are scanned selectively. For the algorithm to function properly, the flame should be stable and should not be extinguished abruptly. The idea of CCAFDA is to first obtain the flame region: the very first frame is scanned fully, and once a flame region is detected, only that region is scanned in the subsequent frames. For each frame the fire region area is determined.
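
    The region-restricted scanning idea could be sketched as follows; this structure is an assumption rather than the authors' code, and the colour thresholds are illustrative.

import cv2
import numpy as np

def find_flame_bbox(bgr):
    """Full scan: bounding box (x, y, w, h) of fire-coloured pixels, or None."""
    b, g, r = cv2.split(bgr)
    mask = ((r > 200) & (g > 100) & (b < 120)).astype(np.uint8)
    if mask.sum() == 0:
        return None
    return cv2.boundingRect(mask)

def track_fire_area(frames):
    """Scan the first frame fully, then scan only the detected region."""
    bbox = None
    for frame in frames:                    # frames: any iterable of BGR images
        if bbox is None:
            bbox = find_flame_bbox(frame)   # expensive: the whole frame is scanned
            continue
        x, y, w, h = bbox
        roi = frame[y:y + h, x:x + w]       # cheap: only the detected region is scanned
        roi_bbox = find_flame_bbox(roi)
        yield 0 if roi_bbox is None else roi_bbox[2] * roi_bbox[3]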

  5. Fire Detection In Video Sequence Using Generic Color Model

    In this paper, the YCbCr color space is used to separate chrominance from luminance, which provides better performance and helps deal with the effects of changing lighting conditions. Instead of modeling fire intensity, the chrominance components model the color of the fire. Y, Cb and Cr stand for the luminance, chrominance-blue and chrominance-red components respectively. For a flame pixel, the luminance should be greater than the chrominance blue, and the chrominance blue should be less than the chrominance red. However, this method does not take into account the flickering nature of fire.
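
    The two inequalities above translate directly into a pixel test, as in the sketch below; it assumes OpenCV, whose conversion returns channels in Y, Cr, Cb order, and reproduces only these two rules rather than the paper's full rule set.

import cv2

def ycbcr_fire_mask(bgr):
    """Mask of pixels satisfying Y > Cb and Cb < Cr."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(int)
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    return (y > cb) & (cb < cr)   # luminance above Cb, and Cb below Cr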

  6. Probabilistic Approach For Visual Fire Detection

    In this paper, a probabilistic model is used to detect flames. The technique is intended for surveillance, and the paper aims at detecting the presence of flame in a video. To detect fire, features are examined between consecutive frames: a possible flame region is detected first, and the features are then extracted from this candidate region. The randomness of the area of a fire region is greater than that of a non-fire region, and the fire region has more surface coarseness because of the fast changes in the pixel values of the flame region.
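
    Two of the cues mentioned above, area randomness and surface coarseness, might be computed as in the sketch below; the exact statistics and the way they are combined are assumptions, since the paper's Bayesian formulation is not reproduced here.

import numpy as np

def area_randomness(areas):
    """Normalised variance of the candidate-region area over recent frames."""
    areas = np.asarray(areas, dtype=float)
    return areas.var() / (areas.mean() ** 2 + 1e-9)

def surface_coarseness(gray_region):
    """Mean absolute difference between neighbouring pixels in the region."""
    g = gray_region.astype(float)
    dx = np.abs(np.diff(g, axis=1)).mean()
    dy = np.abs(np.diff(g, axis=0)).mean()
    return (dx + dy) / 2.0

def flame_score(areas, gray_region):
    # Fire regions change size erratically and have a coarse surface,
    # so both cues should be large for genuine flames.
    return area_randomness(areas) * surface_coarseness(gray_region)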

  7. A Novel Way To Detect Fire In Videos With HMM

    In this paper, the main focus is on environments with irregular fire. First, a candidate region is detected based on motion and fire colored pixels. Background subtraction is done to separate foreground objects from rigid motion and to obtain the areas containing motion; all of these steps are done in the RGB color space. After the candidate region is obtained, control points are placed on its contour in order to capture the flickering nature of the flame. A threshold value is set, and by checking the value of each control point one can decide whether the point belongs to the fire area or not. The next step is feature extraction. Finally, flame flickering is modeled with an HMM: the features obtained in the previous step are used to train the parameters of the HMM. From the contour of the candidate region, each control point is evaluated against a threshold that helps to distinguish fire from non-fire points. In this paper, color, motion and flicker properties are all taken into consideration.
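
    A hedged sketch of the flicker-modelling step is shown below, using hmmlearn as an assumed dependency (the paper does not prescribe a library); the training data, the number of hidden states and the log-likelihood threshold are all placeholders.

import numpy as np
from hmmlearn import hmm

# Hypothetical training data: temporal feature sequences gathered from
# control points known to lie on real flames, shape (n_samples, 1).
fire_sequences = np.random.rand(200, 1)

flicker_model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
flicker_model.fit(fire_sequences)

def is_fire_point(feature_sequence, log_likelihood_threshold=-50.0):
    """Classify one control point from its temporal feature sequence."""
    score = flicker_model.score(np.asarray(feature_sequence, dtype=float).reshape(-1, 1))
    return score > log_likelihood_threshold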

  8. Optical Flow Estimation For Flame Detection In Videos

    Two optical flow techniques are designed specifically to detect fire: Optimal Mass Transport (OMT), which deals with fire having a dynamic texture, and Non Smooth Data (NSD), which deals with saturated flames. In this paper, the RGB frames are first converted into scalar valued frames; in this transformation, fire colors receive a higher weight in the scalar valued frame, and the optical flow is then calculated on the scalar valued frame. Since OMT detects the dynamic nature of fire, it does not work under unfavorable lighting conditions, where the fire appears as a saturated blob; NSD instead characterizes the boundary motion of the blob. Four features are proposed: two analyze the direction of motion and the other two measure its magnitude. The magnitude features take greater values for a moving flame, and the direction features differentiate turbulence from rigid motion. A neural network is trained on these features, and the output of the technique is the probability that the feature vector corresponds to fire.
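
    The sketch below is not the paper's OMT or NSD solver; it uses OpenCV's Farnebäck optical flow on fire-weighted scalar frames simply to illustrate how magnitude and direction features could be derived, and both the weighting and the feature definitions are assumptions.

import cv2
import numpy as np

def fire_weighted(bgr):
    """Scalar frame in which fire-coloured pixels receive a higher weight."""
    b, g, r = cv2.split(bgr.astype(float))
    return (2.0 * r + g - b).clip(0, 255).astype(np.uint8)

def flow_features(prev_bgr, curr_bgr):
    """Magnitude and direction statistics of the flow between two frames."""
    prev_s, curr_s = fire_weighted(prev_bgr), fire_weighted(curr_bgr)
    flow = cv2.calcOpticalFlowFarneback(prev_s, curr_s, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return {
        "mean_magnitude": float(mag.mean()),      # larger for moving flames
        "magnitude_variance": float(mag.var()),
        "direction_spread": float(np.std(ang)),   # turbulence vs. rigid motion
        "direction_mean": float(ang.mean()),
    }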

  9. Fire Detection Based On Flame Color And Area

    A new technique was introduced in which the flame object is extracted based on an area threshold. First, an adaptive threshold is generated by an iterative method and is then used to segment the image. Second, concepts from set theory are used to extract the object contour. Finally, the flame color and fire motion characteristics are used to judge whether a fire has occurred.
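
    The iterative adaptive threshold can be sketched with a standard isodata-style iteration, assumed here since the paper gives no code; the convergence tolerance is an illustrative choice.

import numpy as np

def iterative_threshold(gray, eps=0.5):
    """Iteratively refine a global threshold for a grayscale frame."""
    g = gray.astype(float)
    t = g.mean()
    while True:
        low, high = g[g <= t], g[g > t]
        if low.size == 0 or high.size == 0:
            return t
        new_t = 0.5 * (low.mean() + high.mean())
        if abs(new_t - t) < eps:        # stop once the threshold has converged
            return new_t
        t = new_t

def segment_candidates(gray):
    """Binary flame-candidate mask obtained with the adaptive threshold."""
    return (gray.astype(float) > iterative_threshold(gray)).astype(np.uint8)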

  10. Textural Features For Image Classification

    Texture is one of the important characteristics used to identify objects or regions of interest in an image, whether the image is an aerial photograph, a photomicrograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and demonstrates their application in identification tasks on three different types of image data. In each experiment the data set was divided into a training set and a test set. The test-set identification accuracy is 89% for the photomicrographs, 82% for the aerial photographs, and 83% for the satellite imagery. The results indicate that these easily computable textural features probably have general applicability for a wide variety of image classification applications.
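
    A few Haralick-style features can be computed from gray-tone spatial dependencies as sketched below, using scikit-image's co-occurrence matrix helpers as an assumed convenience (the 1973 paper of course predates such libraries).

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(gray):
    """Contrast, correlation, energy and homogeneity from a GLCM (gray: uint8 image)."""
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "correlation", "energy", "homogeneity")}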

  11. Robust Real-Time Object Detection

This paper describes a visual object detection framework that can process images extremely rapidly while achieving high detection rates. There are three important contributions. The first is the introduction of a new image representation called the Integral Image, which allows the features used by the detector to be computed very quickly. The second is a learning algorithm based on AdaBoost, which selects a small number of critical visual features and produces extremely efficient classifiers. The third is a method for combining classifiers in a cascade, which allows background regions of the image to be discarded quickly while more computation is spent on promising object-like regions.
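
A small sketch of the Integral Image idea follows: once the cumulative sums are computed, the sum of any rectangular region requires only four array accesses, which is what makes the detector's rectangle features so cheap to evaluate. The helper names are illustrative.

import numpy as np

def integral_image(gray):
    """Integral image with an extra zero row and column for easy indexing."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixel values inside the rectangle, using four array accesses."""
    bottom, right = top + height, left + width
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]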

CONCLUSION

Video based systems for detecting flames are becoming very popular, mainly because of their much lower false alarm rates. All the papers referred to in this survey use different techniques, methods and algorithms to detect fire. Many researchers have developed techniques to detect fire via video, and this survey analyses some selected video based flame detection techniques. Some papers propose simple techniques that detect fire using only color, while other techniques are more complex. Obtaining video images, detecting motion and extracting features are the hardest parts. Video based fire detection is a vast and interesting topic and has applications in many fields.

REFERENCES

  1. Healey, Glenn, David Slater, Ted Lin, and Ben Drda, "A system for real-time fire detection", In Computer Vision and Pattern Recognition, 1993, Proceedings CVPR '93, 1993 IEEE Computer Society Conference on, pp. 605-606, IEEE, 1993.

  2. Töreyin, B. Uğur, Yiğithan Dedeoğlu, Uğur Güdükbay, and A. Enis Çetin, "Computer vision based method for real-time fire and flame detection", Pattern Recognition Letters 27, no. 1, pp. 49-58, 2006.

  3. Celik, Turgay, Hasan Demirel, Huseyin Ozkaramanli, and Mustafa Uyguroglu, "Fire detection using statistical color model in video sequences", Journal of Visual Communication and Image Representation 18, no. 2, pp. 176-185, 2007.

  4. Li, Huan, Shan Chang, Zhe Li, and Lipng Shao, "Color context analysis based efficient real-time flame detection algorithm", In Industrial Electronics and Applications, ICIEA 2008, 3rd IEEE Conference on, pp. 1953-1957, IEEE, 2008.

  5. Celik, Turgay, and Hasan Demirel, "Fire detection in video sequences using a generic color model", Fire Safety Journal 44, no. 2, pp. 147-158, 2009.

  6. Borges, Paulo Vinicius Koerich, and Ebroul Izquierdo, "A probabilistic approach for vision-based fire detection in videos", Circuits and Systems for Video Technology, IEEE Transactions on 20, no. 5, pp. 721-731, 2010.

  7. Ding, Jian, and Mao Ye, "A novel way for fire detection in the video using hidden markov model", In Electronic and Mechanical Engineering and Information Technology (EMEIT), 2011 International Conference on, vol. 9, pp. 4413-4416, IEEE, 2011.

  8. Mueller, Matthias, Peter Karasev, Ivan Kolesov, and Allen Tannenbaum, "Optical flow estimation for flame detection in videos", Image Processing, IEEE Transactions on 22, no. 7, pp. 2786-2797, 2013.

  9. Wenhao Wang, and Hong Zhou, "Fire detection based on flame color and area", In 2012 IEEE International Conference on Computer Science and Automation Engineering (CSAE), vol. 3, pp. 222-226, 2012.

  10. Haralick, Robert M., Karthikeyan Shanmugam, and Its'hak Dinstein, "Textural features for image classification", IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6, pp. 610-621, November 1973.

  11. Paul Viola, and Michael Jones, "Robust real-time object detection", In International Journal of Computer Vision, vol. 4, pp. 51-52, 2001.
