- Open Access
- Authors : Arun Eeshwar S, Dhanasekaran B, Diwakar R, Gowtham Vishal S A, R. Shanmugasundaram
- Paper ID : IJERTCONV10IS08031
- Volume & Issue : ETEDM – 2022 (Volume 10 – Issue 08)
- Published (First Online): 30-07-2022
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Fire and Smoke Detection without Sensors: Image Processing based Approach
Arun Eeshwar S, Dhanasekaran B, Diwakar R, Gowtham Vishal S A
Department of Electronics and Communication Engineering, Knowledge Institute of Technology,
Salem, Tamil Nadu
Mr. R. Shanmugasundaram
Assistant Professor,
Department of Electronics and Communication Engineering, Knowledge Institute of Technology,
Salem, Tamil Nadu
Abstract:- In this paper, novel models for fire and smoke detection using image processing are provided. The models use different colour models for fire and for smoke. The colour models are extracted using a statistical analysis of samples taken from different types of video sequences and images. The extracted models can be used in a complete fire/smoke detection system which combines colour information with motion analysis.
INTRODUCTION
Due to the rapid developments in digital camera technology and in content-based video processing, more and more vision-based fire detection systems are being introduced. Vision-based systems generally make use of three characteristic features of fire: colour, motion and geometry. The colour information is used as a pre-processing step in the detection of possible fire or smoke.
There are many fire detection systems in which colour information is used as a pre-processing step. Phillips et al. used colour predicate information and the temporal variation of a small subset of images to recognize fire in video sequences. A manually segmented fire set is used to train a system that recognizes fire-coloured pixels, and the training set is used to form a look-up table for the fire detection system. The authors suggest the use of a generic look-up table if a training set is not available. Chen et al. used chromatic and dynamic features to extract real fire and smoke in video sequences. They employ a moving object detection algorithm in the pre-processing phase; the moving objects are filtered with fire and smoke filters to raise an alarm for possible fire in the video, and a generic fire and smoke model is used to construct the corresponding filters. Töreyin et al. proposed a real-time algorithm for fire detection in video sequences. They combined motion and colour clues with fire flicker analysis in the wavelet domain to detect fire, and used a mixture of ten three-dimensional Gaussians in RGB colour space to model a fire pixel from a training set. Töreyin et al. proposed another algorithm for fire detection which combines a generic colour model based on RGB colour space, motion information and Markov process enhanced fire flicker analysis to create an overall fire detection system, employing the fire colour model developed by Chen et al. Later on, they applied the same fire detection strategy to detect possible smoke samples, which are used as an early alarm for fire detection. They combined colour information with shape analysis to detect possible smoke samples, where the false alarm rate is decreased using a flicker analysis of the smoke region.
Recently, Celik et al. proposed a generic model for fire colour [1-2]. The authors combined their model with simple moving object detection, where the objects are identified by a background subtraction technique. Later on, they proposed a fuzzy logic enhanced approach which uses predominantly luminance information to replace the existing heuristic rules used in the detection of fire pixels. YCbCr colour space is used rather than other colour spaces because of its ability to distinguish luminance from chrominance information. The implicit fuzziness or uncertainties in the rules obtained from repeated experiments, and the impreciseness of the output decision, are encoded in a fuzzy representation expressed in linguistic terms. The single output decision quantity is used to give a better likelihood that a pixel is a fire pixel. The fuzzy model achieves better discrimination between fire and fire-like coloured objects.
Since colour-based pre-processing is an essential part of all image-processing-based fire and smoke detection systems, an efficient colour model is needed. In this paper, we further improve the model defined in our previous work to detect fire pixels using fuzzy logic, and we propose a model for smoke-pixel detection. The proposed colour model for fire detection is compared with existing techniques.
COLOUR MODELS FOR FIRE AND SMOKE

In order to create a colour model for fire and smoke, we have analyzed images which contain fire or smoke samples. YCbCr colour space is chosen intentionally because of its ability to separate illumination information from chrominance more effectively than other colour spaces. The rules defined in RGB colour space to detect possible fire-pixel [1-3] or smoke-pixel candidates can be transformed into YCbCr colour space and the analysis can be performed there. However, those rules fall short of providing a single quantitative measure which indicates how likely a given pixel is a fire pixel. The implicit fuzziness or uncertainties in the rules obtained from repeated experiments, and the impreciseness of the decision variable, can be encoded in a fuzzy representation. This provides a way to express the output decision in linguistic terms. The single output decision then gives the likelihood that a pixel is a fire-pixel or a smoke-pixel. As will be shown later in the paper, this fuzzy output is also capable of better discriminating fire and fire-like coloured objects with respect to the rules defined in [1-3].

2.1 Fire detection

The detection of fire is carried out using the YCbCr samples. We have observed that the fire samples show some deterministic characteristics in their Y, Cb and Cr colour channels. In Figure 1, an image with fire and its colour channels are shown. As can be observed from Figure 1, for a fire pixel it is more likely that Y(x,y) is greater than Cb(x,y), where (x,y) refers to the pixel's spatial location. This is because the luminance information, which is related to the intensity, is naturally expected to be dominant for a fire pixel. Repeated experiments with fire images have shown that the greater the difference between the Y(x,y) and Cb(x,y) components of a pixel, the higher the likelihood that it is a fire pixel. Figure 1 also hints that Cb(x,y) should be smaller than Cr(x,y); similarly, a larger difference between Cb(x,y) and Cr(x,y) means that the corresponding pixel is more likely a fire pixel. So we can summarize the overall relation between Y(x,y), Cb(x,y) and Cr(x,y) as follows:

Y(x,y) > Cb(x,y),   Cr(x,y) > Cb(x,y)                                   (1)

Figure 1 – RGB input image and its Y, Cb and Cr channels: (a) original RGB image, (b) Y channel, (c) Cb channel, (d) Cr channel.

Given an image, each channel is normalized with respect to the following formula:

Y = Y / Imax,   Cb = Cb / Imax,   Cr = Cr / Imax                        (2)

where Imax is the maximum intensity value in the set defined by the combination of the Y, Cb and Cr channels. The equation in (2) normalizes all the samples to the interval [0, 1], so that their differences lie in the range [-1, 1], which is used in the membership function definitions shown in Figure 2.

Let Pf(x,y) be defined as a measure of how likely a pixel located at spatial location (x,y) is a fire pixel. Its range is [0, 1], and it is a mapping of the observation defined in (1) to a quantity which describes the likelihood that a given pixel is a fire pixel. In order to evaluate Pf(x,y), a combination of triangular and trapezoidal membership functions is used both for the difference between Cr(x,y) and Cb(x,y) (i.e., Cr(x,y) - Cb(x,y)) and for the difference between Y(x,y) and Cb(x,y) (i.e., Y(x,y) - Cb(x,y)). Figures 2a, 2b and 2c show, respectively, the membership functions for Y(x,y) - Cb(x,y), Cr(x,y) - Cb(x,y), and Pf(x,y). It should be noted that a Mamdani (Klir et al., 1995) type fuzzy inference system (FIS) is used with the rules defined in Table 1. The distribution of the membership functions and the rules defined in Table 1 is found using experimental analysis.

Figure 2 – Membership functions for (a) Y(x,y) - Cb(x,y), (b) Cr(x,y) - Cb(x,y), and (c) Pf(x,y).

Table 1 shows the rules used in our FIS. The rules are defined in such a way as to reflect our expectation for a fire pixel. A total of 16 rules are constructed to account for all possible combinations of the input variables. Given a set of inputs Y(x,y) - Cb(x,y) and Cr(x,y) - Cb(x,y), the crisp output of the fuzzy system is computed as follows: first, the inputs are fuzzified based on the membership functions shown in Figures 2a and 2b. Then, the min implication operator [4] is applied on the fuzzy rules. Centre of area defuzzification is applied on the union of all rule outputs in order to find a quantitative measure for Pf(x,y). Both Y(x,y) - Cb(x,y) and Cr(x,y) - Cb(x,y) are normalized to [-1, 1] before entering the FIS.

The surface for the 16 rules is shown in Figure 3. The figure shows the likelihood Pf(x,y) as a function of the inputs Y(x,y) - Cb(x,y) and Cr(x,y) - Cb(x,y). The visual appearance of Figure 3 is as expected and is interpreted as follows: when Y(x,y) - Cb(x,y) is less than zero, the corresponding Pf(x,y) approaches 0, and when both Y(x,y) - Cb(x,y) and Cr(x,y) - Cb(x,y) get closer to a value of 1.0, Pf(x,y) approaches 1. The plateau below the peak is used to give a low likelihood to objects which show behaviour similar to fire colour but are not fire, e.g., the smoke pixels around fire regions which are coloured like fire because of the reflectance property of smoke. It is clear from the surface that Pf takes high values where Y(x,y) - Cb(x,y) and Cr(x,y) - Cb(x,y) behave like fire.
| Y(x,y)-Cb(x,y) \ Cr(x,y)-Cb(x,y) | NS | PS | PM | PB |
|----------------------------------|----|----|----|----|
| NS                               | LO | LO | LO | LO |
| PS                               | LO | LO | HI | HI |
| PM                               | LO | HI | HI | ME |
| PB                               | LO | ME | ME | ME |

Table 1 – Rule table for the fuzzy inference system (rows: Y(x,y)-Cb(x,y); columns: Cr(x,y)-Cb(x,y); entries: output term for Pf(x,y)).
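The inference described above can be made concrete with a small, self-contained sketch. This is not the authors' code: the membership-function breakpoints below are assumed (Figure 2 is not reproduced in this version), and the names tri, IN_SETS, OUT_SETS, RULES and fire_likelihood are ours; only the rule table follows Table 1, with min implication and centre-of-area defuzzification as stated in the text.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

# Assumed input sets over the normalized difference range [-1, 1].
IN_SETS = {
    "NS": lambda d: tri(d, -1.0, -0.5, 0.0),
    "PS": lambda d: tri(d, 0.0, 0.25, 0.5),
    "PM": lambda d: tri(d, 0.25, 0.5, 0.75),
    "PB": lambda d: tri(d, 0.5, 1.0, 1.5),
}
# Assumed output sets over the Pf range [0, 1].
OUT_SETS = {
    "LO": lambda p: tri(p, -0.5, 0.0, 0.5),
    "ME": lambda p: tri(p, 0.0, 0.5, 1.0),
    "HI": lambda p: tri(p, 0.5, 1.0, 1.5),
}
# Table 1: rows are Y-Cb terms, columns are Cr-Cb terms, entries are Pf terms.
RULES = {
    "NS": {"NS": "LO", "PS": "LO", "PM": "LO", "PB": "LO"},
    "PS": {"NS": "LO", "PS": "LO", "PM": "HI", "PB": "HI"},
    "PM": {"NS": "LO", "PS": "HI", "PM": "HI", "PB": "ME"},
    "PB": {"NS": "LO", "PS": "ME", "PM": "ME", "PB": "ME"},
}

def fire_likelihood(y_minus_cb, cr_minus_cb, resolution=101):
    """Mamdani-style inference for one pixel: min implication,
    max aggregation, centre-of-area defuzzification."""
    p_axis = np.linspace(0.0, 1.0, resolution)
    aggregated = np.zeros_like(p_axis)
    for row, cols in RULES.items():
        mu_row = IN_SETS[row](y_minus_cb)
        for col, out_term in cols.items():
            strength = min(mu_row, IN_SETS[col](cr_minus_cb))        # min implication
            aggregated = np.maximum(aggregated,
                                    np.minimum(strength, OUT_SETS[out_term](p_axis)))
    if aggregated.sum() == 0:
        return 0.0
    return float((p_axis * aggregated).sum() / aggregated.sum())     # centroid
```

With these assumed shapes, a pair of clearly positive differences (e.g., fire_likelihood(0.4, 0.5)) yields a high output, while a negative Y(x,y) - Cb(x,y) difference drives Pf toward 0, matching the surface behaviour described for Figure 3.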
Figure 3 – Surface of the rules given in Table 1 and used in the FIS, shown from different views.

Figure 4 shows some images and their corresponding Pf. It is clear that Pf takes higher values over fire regions and lower values over non-fire regions. Note that Pf has values in [0, 1].

Figure 4 – RGB input image and its Pf: column (a) RGB input image, column (b) Pf for the corresponding input image.

2.2 Smoke detection

Similar to fire detection, we can model the smoke pixels. However, the smoke pixels do not show chrominance characteristics like fire pixels. At the beginning, when the temperature of the smoke is low, the smoke is expected to show colours ranging from white-bluish to white. As the fire develops, the smoke's temperature increases and it takes colours ranging from black-grayish to black. As can be seen from Figure 1, most smoke samples have a grayish colour. So we can formulate the smoke pixels as follows:

|R(x,y) - G(x,y)| <= Th,   |G(x,y) - B(x,y)| <= Th,   |R(x,y) - B(x,y)| <= Th        (3)

where Th is a global threshold ranging from 15 to 25. Equation (3) states that the smoke pixels should have similar intensities in their R, G and B colour channels. Figure 5 shows the smoke-pixel segmentation using the equation defined in (3). Since the smoke information will be used in an early fire detection system, the smoke samples should be detected while the smoke still has a low temperature. This is the case where the smoke samples have colours ranging from white-bluish to white, which means that the saturation of the colour should be as low as possible. Using this idea, the rule defined in (4) is applied in HSV colour space,

S(x,y) <= 0.1                                                                         (4)

and its application together with (3) is shown in Figure 6.
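A minimal sketch of rules (3) and (4), assuming an 8-bit BGR image as loaded by OpenCV (the function name smoke_mask and the default threshold value are ours):

```python
import cv2
import numpy as np

def smoke_mask(bgr_image, th=20):
    """Combine the grayish-pixel test of (3) with the low-saturation test of (4)."""
    img = bgr_image.astype(np.int16)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]

    # Eq. (3): the R, G and B intensities differ by no more than Th (15..25).
    grayish = (np.abs(r - g) <= th) & (np.abs(g - b) <= th) & (np.abs(r - b) <= th)

    # Eq. (4): saturation S(x,y) <= 0.1; OpenCV stores S in [0, 255] for 8-bit images.
    s = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)[..., 1]
    low_saturation = s <= 0.1 * 255

    return grayish & low_saturation
```

Using only the grayish test corresponds to the Figure 5 segmentation; adding the saturation test corresponds to Figure 6.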
Figure 5 – RGB input image and smoke segmentation: column (a) RGB input image, column (b) segmented smoke image using equation (3).
As can be seen from Figure 5, the output is noisy, but the motion property of the smoke can be used to remove such noisy parts. It can easily be observed from the first row of Figure 5 that the sky is detected as smoke because of its grayish colour. However, if we embed the motion detection part, the sky will be removed because its colour stays constant over some duration.
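The paper leaves the motion analysis to the complete detection system; as one simple, assumed way to realize the "constant colour over some duration" idea, a frame-differencing filter could be combined with the colour mask (the function name and threshold below are ours):

```python
import cv2

def suppress_static_regions(prev_gray, curr_gray, colour_mask, diff_th=10):
    """Keep only colour-mask pixels whose grey level changed between two frames,
    so static grayish regions such as the sky are discarded."""
    motion = cv2.absdiff(prev_gray, curr_gray) > diff_th
    return colour_mask & motion
```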
Figure 6 – RGB input image and smoke segmentation: column (a) RGB input image, column (b) segmented smoke image using equations (3) and (4).
COMPARISON AND ANALYSIS
We have compared our new fire-pixel detection model with the existing ones. Smoke-pixel detection tests are not carried out because of the nature of smoke: it does not show characteristics as robust as those of fire pixels. For this reason, more discriminative features are needed for smoke detection, to be combined with the smoke colour model (e.g., smoke smooths the edges of the background when it starts), which is out of this paper's scope.
For comparison purposes, two sets of images were collected from the Internet. One set is composed of images that contain fire; this fire set consists of 332 images and shows diversity in fire colour and environmental illumination. The other set does not contain any fire but contains fire-like coloured regions such as the sun and other reddish objects.
Two types of comparisons are carried out: one for the evaluation of the correct fire detection rate and the other for the false alarm rate. The following criterion is used for declaring a fire region: if the model detects at least 10 pixels of a fire region in a given image, it is counted as a correct detection, where the images are of size 320x240. For the false alarm rate, the same detection criterion is used with the non-fire image set.
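A minimal sketch of this counting criterion (the function name is ours; fire_mask is assumed to be a Boolean per-pixel classification of a 320x240 test image):

```python
import numpy as np

def image_declared_fire(fire_mask, min_pixels=10):
    """Count an image as a detection when at least `min_pixels` pixels
    are classified as fire, as in the criterion described above."""
    return int(np.count_nonzero(fire_mask)) >= min_pixels
```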
In Table 2, we have tabulated the fire detection results together with the false alarm rates. It is clear from Table 2 that the new method, supported with fuzzy logic and defined in YCbCr colour space, outperforms the models developed in other colour spaces both in detection rate and in false alarm rate. The new method shows better performance with respect to the technique defined in [10] because it eliminates colours which are similar to fire colour but do not belong to fire. The new technique does not require a colour model defined in the CbCr colour plane.
As can be observed from Table 2, the YCbCr colour space outperforms the other colour spaces both in correct detection rate and in false alarm rate. This is due to the ability of YCbCr colour space to separate luminance from chrominance. For the models in [1-3], the rules fall short of describing a single quantitative measure which can indicate how likely a given pixel is a fire pixel; as a result, it becomes difficult to discriminate between fire regions and fire-like regions. The implicit fuzziness or uncertainties in the rules are encoded in a fuzzy representation, which provides a way to express the output decision in linguistic terms. As a result, the much-needed discrimination between fire and fire-like regions is enhanced. This is clearly reflected in Table 2 with a 4.5% false alarm rate, which is a reduction of 5% compared to the method defined in [10].
| Model                    | Detection Rate (%) | False Alarm Rate (%) |
|--------------------------|--------------------|----------------------|
| RGB, Chen et al. [3]     | 93.90              | 66.42                |
| RGB, Celik et al. [1]    | 78.50              | 28.21                |
| rgb, Celik et al. [2]    | 97.00              | 78.39                |
| YCbCr, Celik et al. [10] | 99.00              | 9.50                 |
| Proposed                 | 99.00              | 4.50                 |

Table 2 – Performance comparison of the models with respect to detection rate and false alarm rate.
Figure 7 – Samples from a video sequence and its Pf.
In Figure 7, we show the application of the proposed colour models to a video sequence. The sequence shows a forest fire and was obtained from the Internet. It was recorded from a helicopter, so the camera is not static. It is clear that the proposed models detect the fire effectively.
CONCLUSIONS
We have developed two models: one for fire detection and the other for smoke detection. For fire detection, concepts from fuzzy logic are used to replace the existing heuristic rules and make the classification more robust in effectively discriminating fire and fire-like coloured objects. The model achieves up to a 99.00% correct fire detection rate with a 4.50% false alarm rate. For smoke detection, a statistical analysis is carried out using the idea that smoke shows a grayish colour under different illumination. The developed models can be used as the pre-processing stage of complete fire or smoke detection systems. As future work, region-based fire and smoke recognition will be studied.
REFERENCES
[1] Celik, T., Demirel, H., Ozkaramanli, H., Uyguroglu, M., Fire Detection in Video Sequences Using Statistical Color Model, Proc. Internat. Conf. on Acoustics, Speech, and Signal Processing, vol. 2, pp. II-213 - II-216, May 2006.
[2] Celik, T., Demirel, H., Ozkaramanli, H., Automatic Fire Detection in Video Sequences, European Signal Processing Conference, EUSIPCO-06, Sept. 2006.
[3] Chen, T., Wu, P., Chiou, Y., An early fire-detection method based on image processing, Proc. IEEE Internat. Conf. on Image Processing, ICIP'04, pp. 1707-1710, 2004.
[4] Klir, G. J., Yuan, B., Fuzzy Sets and Fuzzy Logic, Prentice Hall, 1995.
[5] Mathews, J. H., Fink, K. D., Numerical Methods Using MATLAB, Prentice Hall, 1999.
[6] Phillips III, W., Shah, M., Lobo, N. V., Flame recognition in video, Pattern Recognition Letters, 23(1-3), 319-327, 2002.
[7] Töreyin, B. U., Dedeoğlu, Y., Güdükbay, U., Çetin, A. E., Computer vision-based method for real-time fire and flame detection, Pattern Recognition Letters, 27(1), 49-58, 2006.
[8] Töreyin, B. U., Dedeoğlu, Y., Çetin, A. E., Flame detection in video using hidden Markov models, Proc. IEEE Internat. Conf. on Image Processing, pp. 1230-1233, 2005.
[9] Töreyin, B. U., Dedeoğlu, Y., Çetin, A. E., Contour based smoke detection in video using wavelets, European Signal Processing Conference, EUSIPCO-06, Sept. 2006.
[10] Celik, T., Ozkaramanli, H., Demirel, H., Fire pixel classification using fuzzy logic and statistical color model, Proc. IEEE Internat. Conf. on Acoustics, Speech, and Signal Processing, ICASSP 2007.
[11] Celik, T., Demirel, H., Ozkaramanli, H., Uyguroglu, M., Fire detection using statistical color model in video sequences, Journal of Visual Communication and Image Representation, 2007, doi:10.1016/j.jvcir.2006.12.003.