SVM Based Flame Detection using Optical Flow Detection

DOI : 10.17577/IJERTV4IS010783


Madhuri R. Choutmahal
Department of Engineering, Sinhgad College of Engineering, Pune, India

Pravin M. Kamde
Associate Professor, Department of Engineering, Sinhgad College of Engineering, Pune, India

Abstract: This paper presents an automatic system for detecting flames in videos and proposes a vision-sensor-based flame or fire-detection method for an early-warning fire monitoring system. Video sequences provide more information about objects and their surroundings than still images, but they must be processed as continuous image frames, one after another. In this paper we consider a finite set of motion features based on motion estimation using optical flow. The basic idea is to distinguish the fast, turbulent motion of fire from the well-structured, rigid motion of other moving objects. Candidate fire regions are first detected using adapted versions of previous related methods, such as Optimal Mass Transport (OMT) optical flow, which is used to detect moving regions and flame- or fire-colored pixels. Then, to remove non-fire pixels, characteristic features of the flow-vector magnitudes and their directions are extracted using the Non-Smooth Data (NSD) method. A Support Vector Machine (SVM) is used for classification and gives the best results compared with previously used classifiers.

Keywords: Motion estimation, optical flow, video analysis, flame detection, SVM.

  1. INTRODUCTION

    Nowadays, because of the increasing interest and progress in video processing and computer vision technology, more inexpensive digital video acquisition devices are available on the market for surveillance, which means more applications are being designed around digital video. Unlike a single image, a video sequence provides additional information about how objects and their surroundings change over time, but it requires more storage space and a wider transmission bandwidth.

    It is crucial to detect the outbreak of a fire quickly in order to prevent material damage and human casualties. Although traditional point sensors can only detect heat or smoke particles, they have been relatively successful for indoor flame or fire detection. However, they fail to detect fire in large open spaces such as auditoriums, ships, and especially forests.

    Existing methods for visual flame or fire detection depend on spectral analysis. These approaches are susceptible to false alarms caused by objects that have the same color as fire, such as the sun or an orange car. Toreyin et al. [4] used fire-colored pixels in the moving regions of a frame together with a temporal and spatial wavelet analysis. They showed good results but relied on many heuristic thresholds. Phillips et al. [10] used a color-lookup table to detect candidate fire regions and the temporal variation within these candidate regions to validate the final fire regions.

    For more precise and reliable fire detection, we first detect candidate fire regions by adapting ideas from previous studies, namely detecting moving regions and flame- or fire-colored pixels. We then remove non-fire pixels. A Support Vector Machine (SVM) classifier is used for the final fire-pixel detection; it yields a precision value from which we conclude whether fire is present or not. Fig. 1 shows the flame-detection system.

    Vision-based flame or fire detection consists of the following three steps: (1) pre-processing, which converts RGB frames to gray-scale frames; (2) feature extraction, which extracts the features used to detect a specific object or target; and (3) classification, which takes the calculated features as input and decides whether the target is present. Supervised machine-learning classifiers such as the SVM are trained on a dataset of extracted features together with the ground truth.

    The remainder of this paper is organized as follows: Section II gives a brief summary of related work, Section III details the proposed work, Section IV describes the design and implementation, Section V presents the experimental results, and Section VI concludes the paper.

  2. RELATED WORK

    Computer vision concepts are usually inspired by human vision. In [6], the proposed method analyses frame-to-frame changes of low-level features such as color, area size, boundary roughness, surface coarseness, and skewness that describe the possible fire regions. Flickering, a typical characteristic feature of fire, has also been analyzed in the wavelet domain [3], [4]. In [7], a method was developed that handles both smooth and turbulent, fast flames. Classical optical-flow algorithms were analyzed in [8] for the recognition of various dynamic textures.

    Liu et al. [5] used shape, spectral, and temporal models of fire regions in continuous image sequences. However, they assumed that fire regions change shape irregularly over time, yet the shape and size of other moving objects can also change strongly under non-rigid motion. Chen et al. [6] used the RGB and HSI color models together with a dynamic analysis of the flames, matching the disorderly characteristic of flames through the growth of fire pixels to detect fire.

    None of the above authors employed motion estimation based on optical flow as in [1] and [2]. In [1], Horn and Schunck developed a method for finding the optical-flow pattern which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the continuous image sequence. In [2], Lucas and Kanade introduced a widely used method for optical-flow estimation which assumes that the flow is effectively constant in a local neighborhood of each pixel and solves the basic optical-flow equations for all pixels in that neighborhood.

  3. PROPOSED WORK

    As discussed in Section I, flame or fire detection in video sequences consists of three parts: (1) pre-processing, (2) feature extraction, and (3) classification. They are described below.

    1. Pre-processing:

      In the pre-processing step, the input video is first divided into RGB image frames. These RGB frames are then converted to scalar-valued (gray-scale) frames in which flame- or fire-like color pixels receive a higher weight.
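      As a rough illustration of this step, the sketch below converts an RGB frame into such a fire-weighted scalar frame. The particular weighting (boosting pixels whose red channel dominates green and blue) is an assumption chosen for illustration, not necessarily the exact transform used in this work.

import numpy as np

def rgb_to_fire_weighted_gray(frame_rgb):
    """Convert an RGB frame (H x W x 3, uint8) to a scalar-valued frame
    that emphasises fire-like (red/orange) pixels.

    The weighting is illustrative: it boosts pixels whose red channel
    dominates green and blue, a common heuristic for flame colour.
    """
    frame = frame_rgb.astype(np.float32) / 255.0
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    luminance = 0.299 * r + 0.587 * g + 0.114 * b           # standard gray-scale
    fire_weight = np.clip(r - np.maximum(g, b), 0.0, 1.0)   # red dominance
    return luminance * (1.0 + fire_weight)                  # fire-weighted scalar frame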

    2. Optical Flow Estimation:

    The input video may contain one or, more typically, several moving objects, i.e., some or all objects in the consecutive image frames are in motion. Optical flow is the pattern of apparent motion of these moving objects across the consecutive image frames.

    Optical-flow computation results in motion estimation, i.e., the analysis of motion direction and the determination of motion velocity at the image points. Horn and Schunck [1] and Lucas and Kanade [2] first introduced the concept of optical-flow estimation.

    1. Classical Optical Flow:

      Optical-flow estimation is normally based on the following two assumptions:

      1. The observed brightness or intensity of a particular object point in the image is constant over time.

      2. Neighbouring points in the image plane move in a similar way (the velocity smoothness constraint).

        Optical-flow estimation establishes a correspondence between the pixels of the current frame and those of the previous frame of the video sequence.

        dI/dt = I_x u + I_y v + I_t = 0                (1)

        where I(x, y, t) is the sequence of intensity image frames with spatial coordinates (x, y) and time t ∈ [0, T]. The flow vector (u, v) = (x_t, y_t) points in the direction in which the pixel (x, y) moves in the video sequence.
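        For illustration, Eq. (1) can be solved densely with a standard library routine. The sketch below uses OpenCV's Farneback estimator as a readily available stand-in for the classical estimators of [1] and [2]; it returns the flow components together with per-pixel magnitude and direction, which are used later for feature extraction.

import cv2
import numpy as np

def dense_flow(prev_gray, next_gray):
    """Estimate a dense flow field (u, v) between two gray-scale frames.

    Farneback's method is used here as a stand-in for the classical
    estimators; like them, it assumes brightness constancy and local
    smoothness, as in Eq. (1).
    """
    # Parameters: pyr_scale=0.5, levels=3, winsize=15, iterations=3,
    # poly_n=5, poly_sigma=1.2, flags=0 (reasonable defaults).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]
    magnitude = np.sqrt(u ** 2 + v ** 2)   # per-pixel speed
    direction = np.arctan2(v, u)           # per-pixel motion direction (radians)
    return u, v, magnitude, direction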

    2. Optimal Mass Transport (OMT) Optical Flow:

      As discussed in the previous subsection, the classical optical-flow methods are based on assumptions, such as brightness constancy and flow smoothness, that do not hold for fire motion.

      As a result, they fail to model fire in images or video for two reasons. First, fire does not satisfy the brightness-constancy assumption of Eq. (1), because the intensity changes rapidly during burning due to the fast pressure and heat dynamics of the flame. Second, and more importantly, the smoothness regularization can be counter-productive for detecting fire motion, which tends to have a non-smooth motion field.

    3. Non-Smooth Data (NSD) Optical Flow:

    NSD optical flow is computationally inexpensive and is used to discriminate between fire and non-fire motion. The NSD flow directions are driven entirely by the data term, under the constraint that the flow magnitudes do not become too large. This method is therefore not expected to perform well for typical optical-flow applications, where flow smoothness plays the most important role.
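    The behaviour described above can be approximated with a simple per-pixel sketch: the flow is driven purely by the data term of Eq. (1), with a quadratic penalty that keeps the magnitudes small. This is only an illustration of the idea and not the exact NSD formulation.

import cv2
import numpy as np

def nsd_like_flow(prev_gray, next_gray, lam=1.0):
    """Per-pixel, data-driven flow with a magnitude penalty.

    Minimises (Ix*u + Iy*v + It)^2 + lam*(u^2 + v^2) independently at
    each pixel, giving a flow along the image gradient whose magnitude
    is kept small by lam. A rough approximation of the NSD behaviour.
    """
    prev = prev_gray.astype(np.float32)
    nxt = next_gray.astype(np.float32)
    ix = cv2.Sobel(prev, cv2.CV_32F, 1, 0, ksize=3)   # spatial derivative in x
    iy = cv2.Sobel(prev, cv2.CV_32F, 0, 1, ksize=3)   # spatial derivative in y
    it = nxt - prev                                   # temporal derivative
    denom = ix ** 2 + iy ** 2 + lam
    u = -it * ix / denom
    v = -it * iy / denom
    return u, v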

  4. PROGRAMMER'S DESIGN

    Fig. 1 Flame-detection system

      1. Implementation

        1. Pre-processing:

          • Divide the input video into continuous RGB image frames.

          • Colour transformation: convert RGB frames to scalar (gray-scale) image frames.

        2. Feature extraction module: To extract the features of the images.

          Optical Flow Estimation:

          • Optimal Mass Transport (OMT),

          • Non-Smooth Data (NSD)

            Essential pixels: detection of fast, fire-like pixels and rejection of pixels with little motion.

          • Features:

            • f1 = OMT transport energy,

            • f2 = NSD magnitude,

            • f3 = OMT source match,

            • f4 = NSD directional variance.

        3. Classification: SVM is used for classification.

          • Supervised Classification has been done using the Support Vector Machine.

          • Precision metric will be given by SVM as the output. Maximum precision means existence of fire in that particular video sequence.

            The processing is shown in Fig. 2. The input RGB image is converted into a gray-scale (pre-processed) image, shown along with its computed optical flow.

            Fig. 2 Processing

      2. Mathematical Model

    Let S be the flame or fire-detection-in-videos system, defined as

    S = (I, O, D, F, f(x))

    Where,

    I: Input video.

    O: Output, the precision metric obtained from the SVM.

    D: Flame or fire video/image database.

    F: 4-D feature vector.

    f(x): A set of functions

    f(x) = {split_video(), rgb2scalar(), OMT(), NSD(), ess_pix(), get_features(), classify(), get_precision()}

    Step 1: split_video()

    First, we divide the input flame or fire video into RGB frames with a spatial dimension of 240×360 pixels.

    This gives I = {I0, I1, I2, I3, …, In}.
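    A minimal sketch of split_video() using OpenCV's video reader is given below; the 240×360 frame size follows the text, while everything else (BGR-to-RGB conversion, reading every frame) is an implementation assumption.

import cv2

def split_video(path, size=(360, 240)):
    """Read a video file and return its frames as a list of RGB images,
    resized to the 240x360 spatial resolution used in the text.

    OpenCV returns frames in BGR order, so they are converted to RGB.
    """
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frame_bgr = cv2.resize(frame_bgr, size)                 # (width, height)
        frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames                                               # I = {I0, I1, ..., In}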

    Step 2: rgb2scalar()

    The RGB frames are then converted into gray-scale (scalar-valued) frames.

    Step 3: Optical Flow estimation

    The optical flow is computed in order to detect the fire or flame, using two estimators:

    • Optimal Mass Transport (OMT) and

    • Non-Smooth Data (NSD).

      The OMT method is used to detect dynamic textures in the image sequence, while NSD is used to differentiate between fire and non-fire pixels.

      Step 4: ess_pix()

      This function keeps only fast-moving, flame- or fire-like pixels and removes non-fire-like pixels from a particular candidate region.

      Consider a sub-region of a frame in R². The optical-flow field is computed over this sub-region, and the resulting set of essential pixels is denoted e.
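      Under the assumption that an essential pixel is a fire-coloured candidate pixel whose flow magnitude exceeds a threshold, ess_pix() can be sketched as follows; the threshold value is illustrative.

import numpy as np

def ess_pix(magnitude, candidate_mask, min_speed=1.0):
    """Keep only fast-moving, fire-like pixels.

    magnitude      : per-pixel optical-flow magnitude (H x W)
    candidate_mask : boolean mask of fire-coloured candidate pixels (H x W)
    min_speed      : illustrative threshold below which motion is discarded

    Returns a boolean mask of the essential pixels e.
    """
    return candidate_mask & (magnitude > min_speed)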

                  Detection rate   True positive   False positive   Missing rate
      Movie 1          44.0             44.0             0.0            56.0
      Movie 2          56.6             82.6             4.9            43.4
      Movie 3           9.1              7.9             0.0            91.9
      Movie 4          53.9              7.1            44.1             4.7
      Movie 5          88.2             63.7            10.0            17.8
      Movie 6          94.4             94.4             0.0             5.6
      Movie 7         100              100               0.0             0.0
      Movie 8         100              100               0.0             0.0
      Movie 9          87.6             86.8             0.8            11.6
      Movie 10        100              100               0.0             0.0
      Movie 11        100              100               0.0             0.0
      Movie 12        100              100               0.0             0.0
      Average          89.5             87.4             3.37           10.7

      Table 1 Results of detection rates (percentage)

      Fig. 3 System Performance Graph

      Step 5: get_features()

      This function extracts the features; a 4-D feature vector is calculated.

      F = (f1, f2, f3, f4)
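      A sketch of get_features() is given below. The OMT-based features f1 and f3 depend on the OMT flow model, which is not reproduced here and is therefore passed in; f2 and f4 are approximated from the NSD-style flow over the essential pixels as the mean magnitude and the circular variance of the flow directions.

import numpy as np

def get_features(omt_energy, omt_source_match, u, v, essential):
    """Assemble the 4-D feature vector F = (f1, f2, f3, f4).

    f1 (OMT transport energy) and f3 (OMT source match) are assumed to be
    computed by the OMT flow model and are passed in. f2 and f4 are
    derived from the NSD-style flow (u, v) over the essential pixels:
    mean flow magnitude and circular variance of the flow directions.
    These are approximations of the listed features.
    """
    mag = np.sqrt(u ** 2 + v ** 2)[essential]
    ang = np.arctan2(v, u)[essential]
    f2 = mag.mean() if mag.size else 0.0
    # circular variance: 1 minus the length of the mean unit direction vector
    f4 = 1.0 - np.hypot(np.cos(ang).mean(), np.sin(ang).mean()) if ang.size else 0.0
    return np.array([omt_energy, f2, omt_source_match, f4])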

      Step 6: classify()

      • The target is detected with the help of the Support Vector Machine (SVM).

    Step 7: get_precision()

    Get the output of the SVM, which is the precision metric. A maximum precision indicates the existence of fire in the video sequence.
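    A sketch of the classification stage using scikit-learn is given below. The RBF kernel, the default hyper-parameters, and the use of precision_score are assumptions, since the exact training setup is not specified here.

from sklearn.svm import SVC
from sklearn.metrics import precision_score

def train_and_evaluate(train_features, train_labels, test_features, test_labels):
    """Train an SVM on 4-D motion-feature vectors and report precision.

    train_features, test_features : arrays of shape (n_samples, 4)
    train_labels, test_labels     : 1 for fire, 0 for non-fire
    The RBF kernel and default parameters are assumptions.
    """
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(train_features, train_labels)
    predictions = clf.predict(test_features)
    precision = precision_score(test_labels, predictions)
    return clf, precision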

  5. EXPERIMENTAL RESULTS

    The proposed SVM-based flame detection method was implemented on a computer with an Intel Core i5 2.50 GHz processor, using an image size of 240×360. Results for some of the test sequences are presented in Table 1. The minimum frame rate for flame or fire detection is 5 frames per second (fps) and the maximum frame rate is 15 fps, including all processing. The frame rate can vary depending on the amount or size of the flame or fire.

    Fig. 3 shows the system performance graph, which indicates that larger feature sets yield a higher precision metric and therefore better results. For the video processed in Fig. 1, the precision metric is 4.907688. From Table 1 it is clear that the SVM gives the best results. These results can be improved further by reducing the frame rate.

  6. CONCLUSIONS

Our proposed flame or fire-detection-in-videos system uses two novel optical-flow estimators, Optimal Mass Transport (OMT) and Non-Smooth Data (NSD), which overcome the drawbacks of the classical optical-flow models when applied to fire content. The computed motion fields are used to define various motion features. These features reliably detect fire, reject non-fire motion, and give better results.

Our method can be used to detect flame or fire in video databases. It can be integrated into a surveillance system monitoring an indoor or outdoor area of interest for early detection of fire.

REFERENCES

[1] B. Horn and B. Schunck, "Determining optical flow," Artif. Intell., vol. 17, nos. 1-3, pp. 185-203, 1981.

[2] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. Int. Joint Conf. Artif. Intell., vol. 2, 1981, pp. 674-679.

[3] B. Ko, K. Cheong, and J. Nam, "Fire detection based on vision sensor and support vector machines," Fire Safety J., vol. 44, no. 3, pp. 322-329, 2009.

[4] B. Toreyin, Y. Dedeoglu, U. Gudukbay, and A. Cetin, "Computer vision based method for real-time fire and flame detection," Pattern Recognit. Lett., vol. 27, no. 1, pp. 49-58, 2006.

[5] C. B. Liu and N. Ahuja, "Vision based fire detection," in Proc. Int. Conf. Pattern Recognition, vol. 4, 2004, pp. 134-137.

[6] T. Chen, P. Wu, and Y. Chiou, "An early fire-detection method based on image processing," in Proc. Int. Conf. Image Process., 2004, pp. 1707-1710.

[7] R. Fedkiw, J. Stam, and H. Jensen, "Visual simulation of smoke," in Proc. Conf. Comput. Graph. Interact. Tech., 2001, pp. 15-22.

[8] S. Fazekas and D. Chetverikov, "Analysis and performance evaluation of optical flow features for dynamic texture recognition," Signal Process., Image Commun., vol. 22, nos. 7-8, pp. 680-691, 2007.

[9] T. Çelik and H. Demirel, "Fire detection in video sequences using a generic color model," Fire Safety J., vol. 44, no. 2, pp. 147-158, 2009.

[10] W. Phillips, III, M. Shah, and N. da Vitoria Lobo, "Flame recognition in video," Pattern Recognit. Lett., vol. 23, nos. 1-3, pp. 319-327, 2002.
