Fire Detection Approaches for the Modern World: A Review


Juby Susan George

Saintgits College of Engineering Kottayam, India

Julianne Elsa Thomas

Saintgits College of Engineering Kottayam, India

Manuel George Thomas*

Saintgits College of Engineering Kottayam, India

Richie George Thomas

Saintgits College of Engineering Kottayam, India

Remya S

Saintgits College of Engineering Kottayam, India

Abstract- Over the years, sensor technologies have developed significantly all over the world. Several sensors have been developed for various applications, such as detecting vehicle obstacles, tracking and surveillance, and so on. Owing to the improved capabilities of cameras, surveillance in particular has captured the attention of researchers. This paper presents a review of various fire sensing technologies and evaluates the performance of the methods reviewed. It concentrates primarily on the framework of the various architectures, their capability to detect fire, and the performance evaluation of the frameworks reviewed.

Keywords- Fire detection, convolutional neural networks, deep learning, R-CNN, flame detection, boundary roughness, smoke detection, CUDA

INTRODUCTION

Globally, millions of hectares of land are destroyed by fire, and thousands of human and animal lives are lost every year. If a fire is not properly contained, the damage it causes can be massive. It is therefore paramount to detect fire at an early stage to prevent large-scale damage. This can be done by detecting fire with surveillance cameras, locating it, and immediately notifying the fire department or disaster management team so that the fire can be contained without causing further damage. There are two approaches to fire detection: (1) traditional fire alarms, and (2) vision sensor-assisted fire detection systems. Traditional fire alarms are based on sensors that require close proximity for activation (e.g., smoke sensors). During the initial stage of a fire, little smoke may be produced, in which case these systems take longer to detect the fire. They also require human involvement to confirm whether the fire is dangerous, and they can raise false alarms. Vision sensor-assisted fire detection systems use cameras for real-time fire detection. As a result, they respond faster than traditional fire alarms and require less human interference. They can cover large areas and can confirm a fire without a visit to the location. They are affordable and can provide information about the intensity, size and location of the fire. By detecting fire quickly and accurately at its incipient stage, the emissions of flammable toxic products, as well as the greenhouse gases produced by the fire itself, can be reduced. These environmental effects are often overlooked, but undoubtedly accompany every fire event.


FIRE DETECTION APPROACHES

  1. Approach I: Based on Heat

    Fires produce heat in the surrounding area, which a heat sensor can detect. The different types of heat sensors are fixed temperature, rate of rise and rate of compensation. A fixed temperature sensor is activated once the temperature crosses a specific threshold value, typically 58 degrees Celsius or above. Fixed temperature sensors can be divided into three categories, namely fusible-element, distributed, and bimetallic [12]. A fusible-element heat detector contains a eutectic alloy. When the temperature rises to the eutectic temperature, the alloy changes state from a solid to a liquid, like solder. The release of a spring held under pressure closes an electric circuit and actuates an alarm. A distributed heat detector comprises a twisted pair of electrical conductors separated from each other by a heat-sensitive insulator and enclosed in a protective sheath. The twisted conductors create an electric circuit, and an alarm is actuated when the insulator changes physical state from a hard solid to a molten state on exposure to heat. Electrical, optical, and sheathed thermocouple are the three categories of distributed heat sensors. In case of fire, some parameters of the cable change with temperature: the change of wire resistance and of surface temperature are the working principles of the electrical and sheathed thermocouple types, respectively. The working principle of bimetallic heat sensors is the thermal expansion of metals [12]. Two metals with different expansion rates are bonded together to form one piece of metal, which is used as the bimetallic strip in fire alarms. As a result of heat, the strip closes the open electrical circuit and activates an alarm. In optomechanical heat detectors, a heat-sensitive insulator separates one or more fibre optic cables, protected by an outer sheath. The fibre optic cable transmits a focused light signal. When exposed to heat, the heat-sensitive insulator changes from a solid to a molten state, interrupting the focused light signal; a device monitoring the signal change actuates an alarm. Electronic heat detectors use one or more thermistors for temperature detection [12].
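The fixed-temperature behaviour described above can be sketched in a few lines. This is a minimal illustration, not any vendor's firmware: the class name is invented, and only the 58 degrees Celsius threshold comes from the text. Like a fusible element, the detector latches once triggered.

```python
# Minimal sketch of a fixed-temperature heat detector.
# Hypothetical class name; the 58 C threshold follows the text above.

class FixedTemperatureDetector:
    """Latches an alarm once the temperature crosses a fixed threshold."""

    def __init__(self, threshold_c: float = 58.0):
        self.threshold_c = threshold_c
        self.alarm = False

    def update(self, temperature_c: float) -> bool:
        # Like a fusible element, the detector latches: once the
        # threshold is crossed, the alarm stays on until reset.
        if temperature_c >= self.threshold_c:
            self.alarm = True
        return self.alarm

    def reset(self) -> None:
        self.alarm = False


detector = FixedTemperatureDetector()
readings = [21.0, 35.5, 57.9, 58.2, 40.0]
print([detector.update(t) for t in readings])
# [False, False, False, True, True]
```

The latch mirrors the physical devices: a melted eutectic alloy or deformed bimetallic strip does not revert on its own, so the alarm persists until explicitly reset.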

  2. Approach II: Based on Gas

    During a fire, air quality parameters deviate from those of good-quality air; carbon monoxide and hydrogen cyanide are the most toxic gases responsible for fire casualties [12]. As the carbon monoxide level increases, the oxygen level decreases. A small change in oxygen concentration indicates a smouldering fire, also called a low-intensity fire, whereas a large change in oxygen concentration indicates a liquid-fuel or fast-burning fire.

    Fig. 1: Acoustic Wave Based Gas Sensing.

    Semiconductor metal oxide gas sensors are highly sensitive and inexpensive. However, these sensors have stability issues that may trigger false alarms. Zeolites can sieve molecules of certain dimensions, allowing only those to enter their pores, which makes them useful for gas separation. The selectivity of gas sensors can therefore be enhanced by applying layers of a material such as zeolite to metal oxides [18]. Acoustic wave-based gas sensing methods measure a shift in acoustic wave velocity. A wavelength- or intensity-modulated laser beam is passed through the test gas, as shown in Fig. 1 [28]. The gas molecules absorb and release the energy of the laser beam, creating an acoustic wave that is detected by an acoustic detector. The magnitude of the acoustic wave provides information about the gas concentration.
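The smouldering-versus-fast-burning distinction above can be expressed as a simple decision rule. The thresholds below are invented for illustration; real gas detectors are calibrated per sensor, gas mixture and environment.

```python
# Illustrative sketch of the gas-based inference described in the text.
# All numeric thresholds here are assumptions, not calibrated values.

def classify_fire_from_gas(o2_drop_pct: float, co_ppm: float) -> str:
    """Map an oxygen-concentration drop and a CO level to a coarse label."""
    if co_ppm < 30 and o2_drop_pct < 0.1:
        return "normal"
    if o2_drop_pct < 1.0:
        # Small O2 change with elevated CO: smouldering (low-intensity) fire.
        return "smouldering fire"
    # Large O2 change: liquid-fuel or fast-burning fire.
    return "fast-burning fire"


print(classify_fire_from_gas(0.05, 5))    # normal
print(classify_fire_from_gas(0.4, 120))   # smouldering fire
print(classify_fire_from_gas(3.2, 400))   # fast-burning fire
```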

  3. Approach III: Based on Smoke

    Smoke detection is a crucial factor in preventing disasters and incidents. Despite the large range of methods and sensors proposed for smoke detection, none has been able to sustain a high frame rate while enhancing detection efficiency. There is currently a noticeable demand for automatic smoke detection systems that work quickly while requiring low maintenance costs. These surveillance systems are used for detecting smoke on its own or for early warning of fires. In the latter case, flames do not appear in front of the camera during the first moments after ignition, but the burning materials emit pillars of smoke that occupy larger volumes. In such situations, an incident can be observed even though the source of the fire is concealed behind another object, such as a fence.

    The problem discussed in this paper is the detection of smoke in front of a stationary camera. Smoke may be the early precursor of a fire, and its rapid detection may reduce the damage a fire causes. Since low-cost surveillance systems tend to use low-resolution cameras, the algorithm should work quickly and be able to detect smoke from low-resolution video data. However, low-cost high-definition (HD) cameras are now emerging whose data cannot be processed quickly by existing surveillance algorithms. This paper presents a smoke detection method for surveillance cameras that relies on the shape features of the smoke regions as well as colour information. The technique uses a stationary camera to detect changes in the scene through a background subtraction process. The colour of the smoke is used to assess the probability that the pixels of the scene belong to a smoke region present in the frame. Because of the variable smoke intensity, not all pixels of the actual smoke area appear in the foreground mask; these separate pixels are combined with morphological operations and connected-component labelling methods. The presence of a smoke area is confirmed by analyzing the roughness of its boundary.

    Fig. 2: Smoke detection algorithm. The gray background area represents the steps performed using CUDA.
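The colour step of the pipeline above can be sketched per pixel: smoke tends to be grayish (the three channels nearly equal) at moderate intensity. The thresholds below are assumptions for illustration, not the paper's calibrated values.

```python
# Sketch of the colour test in the smoke pipeline described in the text.
# Thresholds (80-220 intensity window, spread scale of 60) are invented.

def smoke_colour_probability(r: int, g: int, b: int) -> float:
    """Crude probability that an RGB pixel belongs to a smoke region."""
    intensity = (r + g + b) / 3.0
    spread = max(r, g, b) - min(r, g, b)  # grayness: small spread for smoke
    if not (80 <= intensity <= 220):
        return 0.0  # too dark or too bright to be typical smoke
    # Map channel spread [0, 60] linearly to probability [1, 0].
    return max(0.0, 1.0 - spread / 60.0)


print(round(smoke_colour_probability(150, 152, 155), 2))  # grayish: high
print(round(smoke_colour_probability(250, 120, 30), 2))   # orange flame: 0.0
```

In the full pipeline this per-pixel score is combined with the background-subtraction mask before the morphological and connected-component steps.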

    The final step of the algorithm is to test the density of edge pixels in an area. Objects in the current and previous frames are compared to distinguish between fluid smoke regions and rigid moving objects. Some parts of the algorithm have been accelerated by parallel processing on CUDA GPUs, allowing quick processing of both low-resolution and high-definition videos. The hybrid approach uses the CPU and GPGPU in one algorithm, where the steps that benefit from parallelization are implemented in CUDA, as shown in Fig. 2 [10]. Compared with a CPU-only implementation of the proposed method, the hybrid approach keeps the processing time for HD videos below 200 ms, as appropriate for surveillance systems, whereas the CPU alone achieves less than two frames per second [10].

    Fig. 3: Results of detection by the algorithm (second row) and detection by [36] (third row).

    The algorithm was tested on multiple video sequences and demonstrated adequate processing time for a realistic range of frame sizes, as shown in Fig. 3 [36]. In future work, the most recent background subtraction method will be run in parallel to achieve noise-resistant detection, and boundary roughness and edge density can be calculated in parallel CPU threads. An interesting combination of Threading Building Blocks (TBB) and CUDA will be used, in the manner explained in [32], to replace the simple step-by-step smoke detection with intelligent classifiers.
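The boundary-roughness test mentioned above is commonly computed as the ratio of a region's contour perimeter to the perimeter of its convex hull: a turbulent smoke plume has a much longer contour than its hull, while a rigid object scores near 1.0. The sketch below is an assumption about how such a check can be implemented (the 1.3 threshold is invented), using Andrew's monotone-chain hull.

```python
# Sketch of a boundary-roughness check: contour perimeter divided by
# convex-hull perimeter. The 1.3 decision threshold is an assumption.
from math import hypot

def perimeter(points):
    """Perimeter of a closed polygon given as an ordered vertex list."""
    return sum(hypot(points[i][0] - points[i - 1][0],
                     points[i][1] - points[i - 1][1])
               for i in range(len(points)))

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def boundary_roughness(contour):
    return perimeter(contour) / perimeter(convex_hull(contour))

# A jagged, plume-like contour vs. a square (rigid object).
jagged = [(0, 0), (1, 5), (2, 1), (3, 5), (4, 1), (5, 5),
          (6, 1), (7, 5), (8, 0), (8, -2), (0, -2)]
square = [(0, 0), (0, 4), (4, 4), (4, 0)]
print(boundary_roughness(jagged) > 1.3)   # True  (smoke-like)
print(boundary_roughness(square) > 1.3)   # False (rigid)
```

In a real pipeline the contour would come from the connected components of the foreground mask rather than hand-written coordinates.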

  4. Approach IV: Based on Flames

Flames have varying shapes and textures, and flame-coloured distractors are common, so it is very difficult to determine the shape of flames in images. Conventional flame detection methods often use handcrafted features to detect flames [1]. To address the high false-alarm rate that results from applying generic object detection directly to flame detection, a globally guided flame detection method was proposed; this strategy uses the same network to produce global image information. As shown in Fig. 4 [3], these two strategies are used to enhance Faster R-CNN (Regions with Convolutional Neural Network features) [24] so that it performs fire detection in a controlled manner. The authors evaluated the method on the BoWFire database and showed that it improved the detection recall by 10.1% compared with the original Faster R-CNN, reduced false alarms by 21.5%, and increased the overall detection accuracy by 9.3%. Tests on the PascalVOC [8] and Corsican Fire databases further validate the proposed methods. The improved results are due to two reasons: the powerful features used by Faster R-CNN and GIN, and the richness of the training set.

Fig. 4: Model structure of global information-guided flame detection.
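The handcrafted colour features used by conventional flame detectors are typically rules of the kind sketched below (cf. the generic colour model of [2]): strong red with the channel ordering R > G > B. The 190 red floor here is an assumption for illustration. Such rules also fire on warm-coloured distractors like sunsets and lamps, which is exactly the false-positive problem the global guidance is meant to fix.

```python
# Minimal example of a handcrafted flame-colour rule of the kind used
# by conventional detectors. The r_min = 190 threshold is an assumption.

def is_flame_coloured(r: int, g: int, b: int, r_min: int = 190) -> bool:
    """Classic flame-pixel heuristic: strong red, with red > green > blue."""
    return r > r_min and r > g > b


print(is_flame_coloured(230, 140, 40))   # flame-like orange: True
print(is_flame_coloured(150, 152, 155))  # grayish smoke:     False
print(is_flame_coloured(240, 230, 235))  # warm white light:  False
```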

Comparison with other methods. First, the effect of the colour-guided anchoring strategy on Faster R-CNN is evaluated on the BoWFire [6] database. As shown in Table II [3], the results are compared with the method of Chino et al. [6], which used handcrafted features to detect flame, and the methods of Muhammad et al. [20][22] using SqueezeNet and MobileNet. Table II [3] gives the comparative results for the precision, recall and F-measure metrics, as well as for the false positive, false negative and accuracy metrics.

Table II [3] shows that the colour-guided anchoring [33] technique can improve the recall of fire detection. The flames in 97.48% of fire pictures are accurately recognized; however, this also increases false positives. The overall precision on the dataset is marginally increased. Likewise, as noted in Table II [3], the strength of Faster R-CNN is its recall of the fire target, but its disadvantages are the absence of global information and the limitations of the training data. The number of false positives using only Faster R-CNN is greater than 25%, which demonstrates that this technique alone is still inadequate on the test pictures. In addition, the authors found that most of the objects mistakenly identified as flames were objects of warm colours, such as sunsets and lights. Compared with other CNN-based methods [19], the proposed method is superior regarding precision, F-value, false positives and accuracy. Sample images of the detection results guided by GIN are given in Fig. 5 [3]. The artificial light source in Fig. 5(b) [3] and the sun in Fig. 5(d) [3] are not detected as flames, because the two images are labelled as non-flame by GIN.

Fig. 5: Detection results guided by GIN: (a) flame in a factory (top left); (b) street with artificial light sources (top right); (c) burning house (down left); (d) river in sunset (down right).

In total, this paper proposes two guided flame detection methods. The first is the colour-guided anchoring strategy, which improves the effectiveness of Faster R-CNN in finding flames. The second is developed to deal with the problem of false positives: global-information-guided flame detection, which significantly reduces the false alarm rate using the Global Information Network (GIN). The authors compared their methods with three available ones on the BoWFire database, and the results show that their methods achieve greater accuracy and fewer false positives. They also measured the speed on the BoWFire database to prove the benefit of the colour-guided strategy. Tests on the Corsican Fire Database and PascalVOC2012 data confirm the stability of the methods. The improved results are attributed to two reasons: the powerful features used by Faster R-CNN and GIN, and the richness of the training database. As a next step, the authors plan to share a feature map between Faster R-CNN and GIN, training both at the same time to reduce the computational slowdown. Finally, they intend to deploy the proposed methods in a fire-fighting robot and test their effectiveness.

Another proposed method is a system containing a lightweight CNN based on the SqueezeNet architecture. This system mainly focuses on the detection and localization of fire. It can minimize human interaction, enables a faster response and has an affordable cost. Furthermore, such systems can confirm a fire without requiring a visit to the location, and can also provide information about the fire, including its location, size, degree, etc. A SqueezeNet deep neural network model, modified according to the target problem, was used. The SqueezeNet model was pretrained on the ImageNet dataset, which contains approximately 14 million images, and is capable of classifying 1000 different objects. For the target problem it is only necessary to classify images as fire or non-fire, so the number of neurons in the final layer was reduced from 1000 to 2. The proposed model consists of two convolutional layers, three max pooling layers, one average pooling layer, and eight modules called fire modules. The input of the model consists of colour images of 224×224×3 pixels. In the first convolution layer, 64 filters of size 3×3 are applied to the input image, generating 64 feature maps. The first max pooling layer selects the maximum activations of those 64 feature maps with a stride of two pixels; hence, the most useful features are retained and the less important ones are discarded. The first two fire modules, fire2 and fire3, have 128 filters. Fire4 and fire5 have 256 filters, fire6 and fire7 have 384 filters, and fire8 and fire9 have 512 filters. Each fire module consists of two additional convolutions: squeeze and expansion. As the names suggest, the squeeze layer and expansion layer squeeze and expand the input, respectively. The final convolution layer is modified according to the target problem by reducing the number of classes to two, namely, fire and normal. The output from the average pooling layer is fed into the Softmax classifier to calculate the probabilities of the two target classes. To avoid overfitting, various models were trained on the collected training data and the performance of all these models was evaluated.
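The final classification step described above reduces to a two-class Softmax over the head's output. A minimal sketch, with made-up logit values standing in for the modified SqueezeNet head:

```python
# Sketch of the final Softmax step: two logits (fire, normal) become
# class probabilities. The logit values below are invented.
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    exps = [math.exp(v - max(logits)) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]


fire_prob, normal_prob = softmax([2.3, -0.7])  # hypothetical head output
print(round(fire_prob, 3), round(normal_prob, 3))  # 0.953 0.047
```

Subtracting the maximum logit before exponentiating does not change the result but avoids overflow for large logits, which is the standard implementation choice.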

A transfer learning strategy was also examined to further improve the accuracy of the proposed model. Using transfer learning, the pretrained SqueezeNet architecture was fine-tuned with a learning rate of 0.001 for 10 epochs. This increased the accuracy from 89.8% to 94.50%, an improvement of approximately 5%. The experimental results obtained using Dataset1 and Dataset2 are compared with various fire detection methods in Table III [20] and Table IV [20], respectively.

As shown in Table III [20], the proposed method was compared with several fire detection algorithms. In terms of false negatives, the findings show that Çelik and Demirel [2] and Foggia et al. [11] are the best algorithms. The algorithm of Habiboglu et al. [13] performs best among the other methods on the basis of false positives; its false negative rate, however, is 14.29%, the worst result among all the methods considered. The AlexNet architecture used without fine-tuning resulted in 90.06% accuracy and reduced the false positives from 11.67% to 9.22%. The transfer learning strategy improved accuracy by 4.33% and decreased the false negatives and false positives by 8.52% and 0.15%, respectively. The results of the proposed model were compared with various fire detection algorithms in terms of relevance, dataset and year of publication. To ensure a full overview of the performance of the proposed approach, another set of metrics (precision, recall and F-measure) was considered. Dataset2 was evaluated in the same way as Dataset1, with the fine-tuned AlexNet and the proposed fine-tuned SqueezeNet models. Using the proposed model, further improvement was achieved: the F-measure score increased from 0.89 to 0.91 and the precision increased from 0.82 to 0.86. It is evident from Table III [20] and Table IV [20] that the proposed model performed better than the state-of-the-art models, and the findings support the efficacy of the proposed framework.
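The precision, recall and F-measure figures compared above follow the standard definitions from confusion counts; a small helper makes them explicit (the counts below are invented for illustration, not taken from the tables):

```python
# Standard detection metrics from confusion counts. The example counts
# are invented; they are not the values reported in Tables III and IV.

def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall and F-measure from true/false positive/negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1


p, r, f1 = precision_recall_f1(tp=86, fp=14, fn=9)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.86 0.91 0.88
```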


CONCLUSION

Detecting fires early with few false positives is the primary goal of a fire sensing system. If the sensors respond quickly, a fire can be extinguished before it destroys life and property. Each approach has its own strengths and disadvantages. Although Approach I has a slow response time, it is reliable and produces few false positives. Approach IV has a false positive rate almost the same as existing models, but it can process multiple surveillance streams; computational redundancy is another issue it faces. Some of the major problems of Approach II are irreversibility, instability and poor selectivity. Approach III is susceptible to noise, but it outperforms the state-of-the-art work by processing frames four times faster at HD resolution. Combining the key strengths of all these fire sensing technologies can overcome the drawbacks of the traditional systems seen today.


REFERENCES

  1. Panagiotis Barmpoutis, Kosmas Dimitropoulos, Kyriaki Kaza, and Nikos Grammalidis. Fire detection from images using faster r-cnn and multidimensional texture analysis. In ICASSP 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8301-8305. IEEE, 2019.

  2. Turgay Celik and Hasan Demirel. Fire detection in video sequences using a generic color model. Fire Safety Journal, 44(2):147-158, 2009.

  3. Chenyu Chaoxia, Weiwei Shang, and Fei Zhang. Information-guided flame detection based on faster r-cnn. IEEE Access, 8:58923-58932, 2020.

  4. Jiaqiu Chen, Yaowei Wang, Yonghong Tian, and Tiejun Huang. Wavelet based smoke detection method with rgb contrast-image and shape constrain. In 2013 Visual Communications and Image Processing (VCIP), pages 1-6. IEEE, 2013.

  5. Thou-Ho Chen, Ping-Hsueh Wu, and Yung-Chuen Chiou. An early fire-detection method based on image processing. In 2004 International Conference on Image Processing (ICIP '04), volume 3, pages 1707-1710. IEEE, 2004.

  6. Daniel YT Chino, Letricia PS Avalhais, Jose F Rodrigues, and Agma JM Traina. Bowfire: detection of fire in still images by integrating pixel color and texture analysis. In 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images, pages 95-102. IEEE, 2015.

  7. Rosario Di Lascio, Antonio Greco, Alessia Saggese, and Mario Vento. Improving fire detection reliability by a combination of video analytics. In International Conference on Image Analysis and Recognition, pages 477-484. Springer, 2014.

  8. Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88(2):303-338, 2010.

  9. Alexander Filonenko, Danilo Cáceres Hernández, and Kang-Hyun Jo. Smoke detection for static cameras. In 2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV), pages 1-4. IEEE, 2015.

  10. Alexander Filonenko, Danilo Cáceres Hernández, and Kang-Hyun Jo. Fast smoke detection for video surveillance using cuda. IEEE Transactions on Industrial Informatics, 14(2):725-733, 2017.

  11. Pasquale Foggia, Alessia Saggese, and Mario Vento. Real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion. IEEE Transactions on Circuits and Systems for Video Technology, 25(9):1545-1556, 2015.

  12. Anshul Gaur, Abhishek Singh, Ashok Kumar, Kishor S Kulkarni, Sayantani Lala, Kamal Kapoor, Vishal Srivastava, Anuj Kumar, and Subhas Chandra Mukhopadhyay. Fire sensing technologies: A review. IEEE Sensors Journal, 19(9):3191-3202, 2019.

  13. Yusuf Hakan Habiboglu, Osman Günay, and A Enis Çetin. Covariance matrix-based fire and flame detection method in video. Machine Vision and Applications, 23(6):1103-1113, 2012.

  14. Li Jinghong, Zou Xiaohui, and Wang Lu. The design and implementation of fire smoke detection system based on fpga. In 2012 24th Chinese Control and Decision Conference (CCDC), pages 3919-3922. IEEE, 2012.

  15. Byoung Chul Ko, Kwang-Ho Cheong, and Jae-Yeal Nam. Fire detection based on vision sensor and support vector machines. Fire Safety Journal, 44(3):322-329, 2009.

  16. Anuj Kumar, Abhishek Singh, Ashok Kumar, Manoj Kumar Singh, Pinakeswar Mahanta, and Subhas Chandra Mukhopadhyay. Sensing technologies for monitoring intelligent buildings: A review. IEEE Sensors Journal, 18(12):4847-4860, 2018.

  17. Yu Cheng Lee, Chin-Teng Lin, Chao Ting Hong, and Miin-Tsair Su. Smoke detection using spatial and temporal analyses. International Journal of Innovative Computing, Information and Control, 8(7A):4749-4770, 2012.

  18. Dominic P Mann, Keith FE Pratt, Themis Paraskeva, Ivan P Parkin, and David E Williams. Transition metal exchanged zeolite layers for selectivity enhancement of metal-oxide semiconductor gas sensors. IEEE Sensors Journal, 7(4):551-556, 2007.

  19. Khan Muhammad, Jamil Ahmad, and Sung Wook Baik. Early fire detection using convolutional neural networks during surveillance for effective disaster management. Neurocomputing, 288:30-42, 2018.

  20. Khan Muhammad, Jamil Ahmad, Zhihan Lv, Paolo Bellavista, Po Yang, and Sung Wook Baik. Efficient deep cnn-based fire detection and localization in video surveillance applications. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(7):1419-1434, 2018.

  21. Khan Muhammad, Jamil Ahmad, Irfan Mehmood, Seungmin Rho, and Sung Wook Baik. Convolutional neural networks based fire detection in surveillance videos. IEEE Access, 6:18174-18183, 2018.

  22. Khan Muhammad, Salman Khan, Mohamed Elhoseny, Syed Hassan Ahmed, and Sung Wook Baik. Efficient fire detection for uncertain surveillance environment. IEEE Transactions on Industrial Informatics, 15(5):3113-3122, 2019.

  23. Ali Rafiee, Reza Dianat, Mehregan Jamshidi, Reza Tavakoli, and Sara Abbaspour. Fire and smoke detection using wavelet analysis and disorder characteristics. In 2011 3rd International Conference on Computer Research and Development, volume 3, pages 262-265. IEEE, 2011.

  24. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497, 2015.

  25. Lucile Rossi, Moulay Akhloufi, and Yves Tison. On the use of stereovision to develop a novel instrumentation system to extract geometric fire fronts characteristics. Fire Safety Journal, 46(1-2):9-20, 2011.

  26. Steve Rudz, Khaled Chetehouna, A Hafiane, Hélène Laurent, and Olivier Séro-Guillaume. Investigation of a novel image segmentation method dedicated to forest fire applications. Measurement Science and Technology, 24(7):075403, 2013.

  27. Deborah Snoonian. Smart buildings. IEEE Spectrum, 40(8):18-23, 2003.

  28. Yanzhen Tan, Congzhe Zhang, Wei Jin, Fan Yang, Hoi Lut Ho, and Jun Ma. Optical fiber photoacoustic gas sensor with graphene nano-mechanical resonator as the acoustic detector. IEEE Journal of Selected Topics in Quantum Electronics, 23(2):199-209, 2016.

  29. Hongda Tian, Wanqing Li, Lei Wang, and Philip Ogunbona. A novel video-based smoke detection method using image separation. In 2012 IEEE International Conference on Multimedia and Expo, pages 532-537. IEEE, 2012.

  30. B Ugur Toreyin, Yigithan Dedeoglu, and A Enis Cetin. Contour based smoke detection in video using wavelets. In 2006 14th European Signal Processing Conference, pages 1-5. IEEE, 2006.

  31. Jinn-Tsong Tsai, Kai-Yu Chiu, and Jyh-Horng Chou. Optimal design of saw gas sensing device by using improved adaptive neuro-fuzzy inference system. IEEE Access, 3:420-429, 2015.

  32. Fan Wang, Xiao Jiang, and Xiao Peng Hu. A tbb-cuda implementation for background removal in a video-based fire detection system. Mathematical Problems in Engineering, 2014, 2014.

  33. Jiaqi Wang, Kai Chen, Shuo Yang, Chen Change Loy, and Dahua Lin. Region proposal by guided anchoring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2965-2974, 2019.

  34. Fan Wu, Yongyi Cui, Fang Qu, and Lei Mai. Experimental study on fire extinguishing characteristics of automatic sprinkler system. In 2015 Sixth International Conference on Intelligent Systems Design and Engineering Applications (ISDEA), pages 389-392. IEEE, 2015.

  35. Feiniu Yuan. A fast accumulative motion orientation model based on integral image for video smoke detection. Pattern Recognition Letters, 29(7):925-932, 2008.

  36. Feiniu Yuan, Zhijun Fang, Shiqian Wu, Yong Yang, and Yuming Fang. Real-time image smoke detection using staircase searching-based dual threshold adaboost and dynamic analysis. IET Image Processing, 9(10):849-856, 2015.
